Oct 29 01:25:29.657968 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Oct 28 23:40:27 -00 2025
Oct 29 01:25:29.657982 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7
Oct 29 01:25:29.657989 kernel: Disabled fast string operations
Oct 29 01:25:29.657993 kernel: BIOS-provided physical RAM map:
Oct 29 01:25:29.657997 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 29 01:25:29.658001 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 29 01:25:29.658007 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 29 01:25:29.658011 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 29 01:25:29.658015 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 29 01:25:29.658019 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 29 01:25:29.658023 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 29 01:25:29.658027 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 29 01:25:29.658031 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 29 01:25:29.658035 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 29 01:25:29.658042 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 29 01:25:29.658046 kernel: NX (Execute Disable) protection: active
Oct 29 01:25:29.658051 kernel: SMBIOS 2.7 present.
Oct 29 01:25:29.658055 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 29 01:25:29.658060 kernel: vmware: hypercall mode: 0x00
Oct 29 01:25:29.658064 kernel: Hypervisor detected: VMware
Oct 29 01:25:29.658069 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 29 01:25:29.658074 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 29 01:25:29.658078 kernel: vmware: using clock offset of 3682535772 ns
Oct 29 01:25:29.658083 kernel: tsc: Detected 3408.000 MHz processor
Oct 29 01:25:29.658088 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 29 01:25:29.658093 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 29 01:25:29.658097 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 29 01:25:29.658102 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 29 01:25:29.658106 kernel: total RAM covered: 3072M
Oct 29 01:25:29.658112 kernel: Found optimal setting for mtrr clean up
Oct 29 01:25:29.658117 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 29 01:25:29.658122 kernel: Using GB pages for direct mapping
Oct 29 01:25:29.658126 kernel: ACPI: Early table checksum verification disabled
Oct 29 01:25:29.658131 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 29 01:25:29.658135 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 29 01:25:29.658140 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 29 01:25:29.658144 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 29 01:25:29.658149 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 29 01:25:29.658153 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 29 01:25:29.658159 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 29 01:25:29.658166 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 29 01:25:29.658170 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 29 01:25:29.658175 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 29 01:25:29.658180 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 29 01:25:29.658186 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 29 01:25:29.658191 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 29 01:25:29.658196 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 29 01:25:29.658201 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 29 01:25:29.658206 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 29 01:25:29.658211 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 29 01:25:29.658216 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 29 01:25:29.658220 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 29 01:25:29.658225 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 29 01:25:29.658231 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 29 01:25:29.658236 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 29 01:25:29.658241 kernel: system APIC only can use physical flat
Oct 29 01:25:29.658246 kernel: Setting APIC routing to physical flat.
Oct 29 01:25:29.658251 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 29 01:25:29.658256 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 29 01:25:29.658260 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 29 01:25:29.658265 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 29 01:25:29.658270 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 29 01:25:29.658276 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 29 01:25:29.658281 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 29 01:25:29.658286 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 29 01:25:29.658291 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Oct 29 01:25:29.658295 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Oct 29 01:25:29.658300 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Oct 29 01:25:29.658305 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Oct 29 01:25:29.658310 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Oct 29 01:25:29.658315 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Oct 29 01:25:29.658320 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Oct 29 01:25:29.658325 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Oct 29 01:25:29.658330 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Oct 29 01:25:29.658335 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Oct 29 01:25:29.658340 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Oct 29 01:25:29.658345 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Oct 29 01:25:29.658349 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Oct 29 01:25:29.658354 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Oct 29 01:25:29.658359 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Oct 29 01:25:29.658364 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Oct 29 01:25:29.658369 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Oct 29 01:25:29.658374 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Oct 29 01:25:29.658379 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Oct 29 01:25:29.658384 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Oct 29 01:25:29.658389 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Oct 29 01:25:29.658394 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Oct 29 01:25:29.658398 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Oct 29 01:25:29.658403 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Oct 29 01:25:29.658408 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Oct 29 01:25:29.658413 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Oct 29 01:25:29.658418 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Oct 29 01:25:29.658424 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Oct 29 01:25:29.658428 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Oct 29 01:25:29.658433 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Oct 29 01:25:29.658440 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Oct 29 01:25:29.658448 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Oct 29 01:25:29.658455 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Oct 29 01:25:29.658462 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Oct 29 01:25:29.658469 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Oct 29 01:25:29.658477 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Oct 29 01:25:29.658485 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Oct 29 01:25:29.658492 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Oct 29 01:25:29.658497 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Oct 29 01:25:29.658501 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Oct 29 01:25:29.658506 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Oct 29 01:25:29.658511 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Oct 29 01:25:29.658516 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Oct 29 01:25:29.658521 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Oct 29 01:25:29.658525 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Oct 29 01:25:29.658530 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Oct 29 01:25:29.658535 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Oct 29 01:25:29.658541 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Oct 29 01:25:29.658546 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Oct 29 01:25:29.658550 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Oct 29 01:25:29.658555 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Oct 29 01:25:29.658571 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Oct 29 01:25:29.658582 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Oct 29 01:25:29.658592 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Oct 29 01:25:29.658609 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Oct 29 01:25:29.658614 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Oct 29 01:25:29.658622 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Oct 29 01:25:29.658631 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Oct 29 01:25:29.658638 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Oct 29 01:25:29.658643 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Oct 29 01:25:29.658648 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Oct 29 01:25:29.658653 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Oct 29 01:25:29.658658 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Oct 29 01:25:29.658664 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Oct 29 01:25:29.658669 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Oct 29 01:25:29.658675 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Oct 29 01:25:29.658680 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Oct 29 01:25:29.658685 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Oct 29 01:25:29.658690 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Oct 29 01:25:29.658695 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Oct 29 01:25:29.658701 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Oct 29 01:25:29.658706 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Oct 29 01:25:29.658711 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Oct 29 01:25:29.658716 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Oct 29 01:25:29.658722 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Oct 29 01:25:29.658728 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Oct 29 01:25:29.658733 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Oct 29 01:25:29.658738 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Oct 29 01:25:29.658747 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Oct 29 01:25:29.658752 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Oct 29 01:25:29.658757 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Oct 29 01:25:29.658762 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Oct 29 01:25:29.658771 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Oct 29 01:25:29.658776 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Oct 29 01:25:29.658783 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Oct 29 01:25:29.658788 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Oct 29 01:25:29.658793 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Oct 29 01:25:29.660180 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Oct 29 01:25:29.660195 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Oct 29 01:25:29.660201 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Oct 29 01:25:29.660207 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Oct 29 01:25:29.660212 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Oct 29 01:25:29.660218 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Oct 29 01:25:29.660223 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Oct 29 01:25:29.660230 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Oct 29 01:25:29.660236 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Oct 29 01:25:29.660241 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Oct 29 01:25:29.660246 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Oct 29 01:25:29.660251 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Oct 29 01:25:29.660256 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Oct 29 01:25:29.660261 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Oct 29 01:25:29.660266 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Oct 29 01:25:29.660272 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Oct 29 01:25:29.660277 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Oct 29 01:25:29.660283 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Oct 29 01:25:29.660288 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Oct 29 01:25:29.660293 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Oct 29 01:25:29.660299 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Oct 29 01:25:29.660304 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Oct 29 01:25:29.660309 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Oct 29 01:25:29.660315 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Oct 29 01:25:29.660320 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Oct 29 01:25:29.660325 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Oct 29 01:25:29.660330 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Oct 29 01:25:29.660336 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Oct 29 01:25:29.660341 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Oct 29 01:25:29.660347 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Oct 29 01:25:29.660352 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Oct 29 01:25:29.660357 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Oct 29 01:25:29.660362 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Oct 29 01:25:29.660367 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 29 01:25:29.660373 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 29 01:25:29.660378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 29 01:25:29.660385 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Oct 29 01:25:29.660391 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Oct 29 01:25:29.660396 kernel: Zone ranges:
Oct 29 01:25:29.660402 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 29 01:25:29.660407 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Oct 29 01:25:29.660412 kernel: Normal empty
Oct 29 01:25:29.660418 kernel: Movable zone start for each node
Oct 29 01:25:29.660423 kernel: Early memory node ranges
Oct 29 01:25:29.660429 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 29 01:25:29.660434 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 29 01:25:29.660441 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 29 01:25:29.660446 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 29 01:25:29.660451 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 29 01:25:29.660457 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 29 01:25:29.660462 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 29 01:25:29.660467 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 29 01:25:29.660473 kernel: system APIC only can use physical flat
Oct 29 01:25:29.660478 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 29 01:25:29.660483 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 29 01:25:29.660490 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Oct 29 01:25:29.660495 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Oct 29 01:25:29.660500 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Oct 29 01:25:29.660505 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Oct 29 01:25:29.660511 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Oct 29 01:25:29.660516 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Oct 29 01:25:29.660522 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Oct 29 01:25:29.660527 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Oct 29 01:25:29.660532 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Oct 29 01:25:29.660537 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Oct 29 01:25:29.660544 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Oct 29 01:25:29.660549 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Oct 29 01:25:29.660554 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Oct 29 01:25:29.660560 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Oct 29 01:25:29.660565 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Oct 29 01:25:29.660570 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Oct 29 01:25:29.660575 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Oct 29 01:25:29.660581 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Oct 29 01:25:29.660586 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Oct 29 01:25:29.660592 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Oct 29 01:25:29.660597 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Oct 29 01:25:29.660603 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Oct 29 01:25:29.660608 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Oct 29 01:25:29.660613 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Oct 29 01:25:29.660619 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Oct 29 01:25:29.660624 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Oct 29 01:25:29.660629 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Oct 29 01:25:29.660635 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Oct 29 01:25:29.660640 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Oct 29 01:25:29.660646 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Oct 29 01:25:29.660652 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Oct 29 01:25:29.660657 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Oct 29 01:25:29.660662 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Oct 29 01:25:29.660668 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Oct 29 01:25:29.660673 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Oct 29 01:25:29.660678 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Oct 29 01:25:29.660683 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Oct 29 01:25:29.660688 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Oct 29 01:25:29.660695 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Oct 29 01:25:29.660700 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Oct 29 01:25:29.660706 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Oct 29 01:25:29.660714 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Oct 29 01:25:29.660722 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Oct 29 01:25:29.660730 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Oct 29 01:25:29.660738 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Oct 29 01:25:29.660745 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Oct 29 01:25:29.660751 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Oct 29 01:25:29.660756 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Oct 29 01:25:29.660763 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Oct 29 01:25:29.660768 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Oct 29 01:25:29.660774 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Oct 29 01:25:29.660779 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Oct 29 01:25:29.660784 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Oct 29 01:25:29.660790 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Oct 29 01:25:29.660812 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Oct 29 01:25:29.660818 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Oct 29 01:25:29.660824 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Oct 29 01:25:29.660831 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Oct 29 01:25:29.660842 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Oct 29 01:25:29.660848 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Oct 29 01:25:29.660853 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Oct 29 01:25:29.660862 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Oct 29 01:25:29.660867 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Oct 29 01:25:29.660873 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Oct 29 01:25:29.660878 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Oct 29 01:25:29.660883 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Oct 29 01:25:29.660890 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Oct 29 01:25:29.660895 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Oct 29 01:25:29.660900 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Oct 29 01:25:29.660905 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Oct 29 01:25:29.660911 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Oct 29 01:25:29.660916 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Oct 29 01:25:29.660921 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Oct 29 01:25:29.660927 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Oct 29 01:25:29.660932 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Oct 29 01:25:29.660937 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Oct 29 01:25:29.660943 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Oct 29 01:25:29.660948 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Oct 29 01:25:29.660954 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Oct 29 01:25:29.660959 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Oct 29 01:25:29.660964 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Oct 29 01:25:29.660969 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Oct 29 01:25:29.660978 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Oct 29 01:25:29.660983 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Oct 29 01:25:29.660988 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Oct 29 01:25:29.660994 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Oct 29 01:25:29.661000 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Oct 29 01:25:29.661005 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Oct 29 01:25:29.661010 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Oct 29 01:25:29.661015 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Oct 29 01:25:29.661021 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Oct 29 01:25:29.661026 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Oct 29 01:25:29.661031 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Oct 29 01:25:29.661037 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Oct 29 01:25:29.661042 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Oct 29 01:25:29.661048 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Oct 29 01:25:29.661053 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Oct 29 01:25:29.661058 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Oct 29 01:25:29.661064 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Oct 29 01:25:29.661069 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Oct 29 01:25:29.661074 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Oct 29 01:25:29.661079 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Oct 29 01:25:29.661085 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Oct 29 01:25:29.661090 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Oct 29 01:25:29.661096 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Oct 29 01:25:29.661101 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Oct 29 01:25:29.661106 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Oct 29 01:25:29.661111 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Oct 29 01:25:29.661117 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Oct 29 01:25:29.661122 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Oct 29 01:25:29.661127 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Oct 29 01:25:29.661132 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Oct 29 01:25:29.661137 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Oct 29 01:25:29.661142 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Oct 29 01:25:29.661149 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Oct 29 01:25:29.661154 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Oct 29 01:25:29.661159 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Oct 29 01:25:29.661164 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Oct 29 01:25:29.661170 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Oct 29 01:25:29.661175 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Oct 29 01:25:29.661180 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Oct 29 01:25:29.661185 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Oct 29 01:25:29.661190 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Oct 29 01:25:29.661197 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Oct 29 01:25:29.661202 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Oct 29 01:25:29.661207 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Oct 29 01:25:29.661213 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Oct 29 01:25:29.661219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Oct 29 01:25:29.661224 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 29 01:25:29.661230 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Oct 29 01:25:29.661235 kernel: TSC deadline timer available
Oct 29 01:25:29.661240 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Oct 29 01:25:29.661247 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Oct 29 01:25:29.661252 kernel: Booting paravirtualized kernel on VMware hypervisor
Oct 29 01:25:29.661258 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 29 01:25:29.661263 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Oct 29 01:25:29.661269 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Oct 29 01:25:29.661274 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Oct 29 01:25:29.661280 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Oct 29 01:25:29.661285 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Oct 29 01:25:29.661290 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Oct 29 01:25:29.661296 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Oct 29 01:25:29.661301 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Oct 29 01:25:29.661307 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Oct 29 01:25:29.661312 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Oct 29 01:25:29.661324 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Oct 29 01:25:29.661330 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Oct 29 01:25:29.661336 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Oct 29 01:25:29.661342 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Oct 29 01:25:29.661347 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Oct 29 01:25:29.661354 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Oct 29 01:25:29.661359 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Oct 29 01:25:29.661367 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Oct 29 01:25:29.661375 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Oct 29 01:25:29.661385 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Oct 29 01:25:29.661395 kernel: Policy zone: DMA32
Oct 29 01:25:29.661403 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7
Oct 29 01:25:29.661409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 29 01:25:29.661416 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Oct 29 01:25:29.661421 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Oct 29 01:25:29.661427 kernel: printk: log_buf_len min size: 262144 bytes
Oct 29 01:25:29.661433 kernel: printk: log_buf_len: 1048576 bytes
Oct 29 01:25:29.661438 kernel: printk: early log buf free: 239728(91%)
Oct 29 01:25:29.661444 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 01:25:29.661450 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 29 01:25:29.661456 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 01:25:29.661462 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 155976K reserved, 0K cma-reserved)
Oct 29 01:25:29.661469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Oct 29 01:25:29.661474 kernel: ftrace: allocating 34614 entries in 136 pages
Oct 29 01:25:29.661480 kernel: ftrace: allocated 136 pages with 2 groups
Oct 29 01:25:29.661487 kernel: rcu: Hierarchical RCU implementation.
Oct 29 01:25:29.661493 kernel: rcu: RCU event tracing is enabled.
Oct 29 01:25:29.661500 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Oct 29 01:25:29.661506 kernel: Rude variant of Tasks RCU enabled.
Oct 29 01:25:29.661512 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 01:25:29.661518 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 01:25:29.661523 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Oct 29 01:25:29.661529 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Oct 29 01:25:29.661535 kernel: random: crng init done
Oct 29 01:25:29.661540 kernel: Console: colour VGA+ 80x25
Oct 29 01:25:29.661546 kernel: printk: console [tty0] enabled
Oct 29 01:25:29.661552 kernel: printk: console [ttyS0] enabled
Oct 29 01:25:29.661558 kernel: ACPI: Core revision 20210730
Oct 29 01:25:29.661564 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Oct 29 01:25:29.661570 kernel: APIC: Switch to symmetric I/O mode setup
Oct 29 01:25:29.661576 kernel: x2apic enabled
Oct 29 01:25:29.661581 kernel: Switched APIC routing to physical x2apic.
Oct 29 01:25:29.661587 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 29 01:25:29.661593 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 29 01:25:29.661599 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Oct 29 01:25:29.661605 kernel: Disabled fast string operations
Oct 29 01:25:29.661611 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 29 01:25:29.661617 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Oct 29 01:25:29.661623 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 29 01:25:29.661629 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Oct 29 01:25:29.661635 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Oct 29 01:25:29.661640 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Oct 29 01:25:29.661646 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 29 01:25:29.661652 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 29 01:25:29.661658 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 29 01:25:29.661664 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 29 01:25:29.661670 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 29 01:25:29.661676 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 29 01:25:29.661681 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 29 01:25:29.661687 kernel: GDS: Unknown: Dependent on hypervisor status Oct 29 01:25:29.661692 kernel: active return thunk: its_return_thunk Oct 29 01:25:29.661698 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 29 01:25:29.661704 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 29 01:25:29.661711 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 29 01:25:29.661717 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 29 01:25:29.661722 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 29 01:25:29.661728 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 29 01:25:29.661734 kernel: Freeing SMP alternatives memory: 32K Oct 29 01:25:29.661740 kernel: pid_max: default: 131072 minimum: 1024 Oct 29 01:25:29.661745 kernel: LSM: Security Framework initializing Oct 29 01:25:29.661751 kernel: SELinux: Initializing. 
Oct 29 01:25:29.661756 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 01:25:29.661763 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 29 01:25:29.661769 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 29 01:25:29.661775 kernel: Performance Events: Skylake events, core PMU driver. Oct 29 01:25:29.661780 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 29 01:25:29.661787 kernel: core: CPUID marked event: 'instructions' unavailable Oct 29 01:25:29.661793 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 29 01:25:29.665311 kernel: core: CPUID marked event: 'cache references' unavailable Oct 29 01:25:29.665327 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 29 01:25:29.665333 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 29 01:25:29.665342 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 29 01:25:29.665348 kernel: ... version: 1 Oct 29 01:25:29.665354 kernel: ... bit width: 48 Oct 29 01:25:29.665359 kernel: ... generic registers: 4 Oct 29 01:25:29.665365 kernel: ... value mask: 0000ffffffffffff Oct 29 01:25:29.665371 kernel: ... max period: 000000007fffffff Oct 29 01:25:29.665376 kernel: ... fixed-purpose events: 0 Oct 29 01:25:29.665382 kernel: ... event mask: 000000000000000f Oct 29 01:25:29.665388 kernel: signal: max sigframe size: 1776 Oct 29 01:25:29.665395 kernel: rcu: Hierarchical SRCU implementation. Oct 29 01:25:29.665401 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 29 01:25:29.665406 kernel: smp: Bringing up secondary CPUs ... Oct 29 01:25:29.665412 kernel: x86: Booting SMP configuration: Oct 29 01:25:29.665418 kernel: .... 
node #0, CPUs: #1 Oct 29 01:25:29.665423 kernel: Disabled fast string operations Oct 29 01:25:29.665429 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Oct 29 01:25:29.665435 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 29 01:25:29.665441 kernel: smp: Brought up 1 node, 2 CPUs Oct 29 01:25:29.665446 kernel: smpboot: Max logical packages: 128 Oct 29 01:25:29.665453 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 29 01:25:29.665459 kernel: devtmpfs: initialized Oct 29 01:25:29.665465 kernel: x86/mm: Memory block size: 128MB Oct 29 01:25:29.665471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 29 01:25:29.665477 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 01:25:29.665483 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 29 01:25:29.665489 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 01:25:29.665495 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 01:25:29.665500 kernel: audit: initializing netlink subsys (disabled) Oct 29 01:25:29.665507 kernel: audit: type=2000 audit(1761701128.085:1): state=initialized audit_enabled=0 res=1 Oct 29 01:25:29.665513 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 01:25:29.665518 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 29 01:25:29.665524 kernel: cpuidle: using governor menu Oct 29 01:25:29.665530 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 29 01:25:29.665536 kernel: ACPI: bus type PCI registered Oct 29 01:25:29.665541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 01:25:29.665547 kernel: dca service started, version 1.12.1 Oct 29 01:25:29.665553 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Oct 29 01:25:29.665559 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Oct 29 01:25:29.665565 kernel: PCI: Using configuration type 1 for base access Oct 29 01:25:29.665571 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 29 01:25:29.665577 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 01:25:29.665582 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 01:25:29.665588 kernel: ACPI: Added _OSI(Module Device) Oct 29 01:25:29.665594 kernel: ACPI: Added _OSI(Processor Device) Oct 29 01:25:29.665599 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 01:25:29.665605 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 29 01:25:29.665612 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 29 01:25:29.665618 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 29 01:25:29.665623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 01:25:29.665629 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 29 01:25:29.665635 kernel: ACPI: Interpreter enabled Oct 29 01:25:29.665641 kernel: ACPI: PM: (supports S0 S1 S5) Oct 29 01:25:29.665646 kernel: ACPI: Using IOAPIC for interrupt routing Oct 29 01:25:29.665652 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 29 01:25:29.665659 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 29 01:25:29.665666 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 29 01:25:29.665755 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 01:25:29.665822 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 29 01:25:29.665874 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 29 01:25:29.665883 kernel: PCI host bridge to bus 0000:00 Oct 29 01:25:29.665933 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 29 01:25:29.665980 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Oct 29 01:25:29.666023 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 29 01:25:29.666065 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 29 01:25:29.666108 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 29 01:25:29.666151 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 29 01:25:29.666209 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Oct 29 01:25:29.666264 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Oct 29 01:25:29.666321 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Oct 29 01:25:29.666379 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Oct 29 01:25:29.666428 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Oct 29 01:25:29.666478 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 29 01:25:29.666525 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 29 01:25:29.666574 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 29 01:25:29.666623 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 29 01:25:29.666677 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Oct 29 01:25:29.666726 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 29 01:25:29.666775 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 29 01:25:29.666837 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Oct 29 01:25:29.666888 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Oct 29 01:25:29.666938 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Oct 29 01:25:29.666991 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Oct 29 01:25:29.667039 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Oct 29 01:25:29.667087 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Oct 29 01:25:29.667134 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Oct 29 01:25:29.667182 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Oct 29 01:25:29.667229 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 29 01:25:29.667281 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Oct 29 01:25:29.667335 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.667384 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.667437 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.667489 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.667542 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.667591 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.667645 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.667700 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.667769 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.667827 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.667881 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.667929 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.667984 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668033 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668089 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668138 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668190 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668240 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668294 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668345 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Oct 29 01:25:29.668397 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668446 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668497 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668563 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668620 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668669 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668721 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668770 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668841 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668892 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.668947 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.668997 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669048 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669097 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669150 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669199 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669254 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669303 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669355 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669404 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669458 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669507 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669561 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669620 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669684 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.669735 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.669786 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.678913 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.678995 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679048 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679107 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679157 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679209 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679257 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679309 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679361 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679414 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679463 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679519 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679568 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679621 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679672 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679739 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Oct 29 01:25:29.679788 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.679856 kernel: pci_bus 0000:01: extended config space not accessible Oct 29 01:25:29.679910 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 29 01:25:29.679973 kernel: pci_bus 0000:02: extended config space not accessible Oct 
29 01:25:29.679984 kernel: acpiphp: Slot [32] registered Oct 29 01:25:29.679991 kernel: acpiphp: Slot [33] registered Oct 29 01:25:29.679996 kernel: acpiphp: Slot [34] registered Oct 29 01:25:29.680002 kernel: acpiphp: Slot [35] registered Oct 29 01:25:29.680008 kernel: acpiphp: Slot [36] registered Oct 29 01:25:29.680014 kernel: acpiphp: Slot [37] registered Oct 29 01:25:29.680020 kernel: acpiphp: Slot [38] registered Oct 29 01:25:29.680026 kernel: acpiphp: Slot [39] registered Oct 29 01:25:29.680032 kernel: acpiphp: Slot [40] registered Oct 29 01:25:29.680038 kernel: acpiphp: Slot [41] registered Oct 29 01:25:29.680044 kernel: acpiphp: Slot [42] registered Oct 29 01:25:29.680050 kernel: acpiphp: Slot [43] registered Oct 29 01:25:29.680056 kernel: acpiphp: Slot [44] registered Oct 29 01:25:29.680062 kernel: acpiphp: Slot [45] registered Oct 29 01:25:29.680067 kernel: acpiphp: Slot [46] registered Oct 29 01:25:29.680080 kernel: acpiphp: Slot [47] registered Oct 29 01:25:29.680086 kernel: acpiphp: Slot [48] registered Oct 29 01:25:29.680092 kernel: acpiphp: Slot [49] registered Oct 29 01:25:29.680098 kernel: acpiphp: Slot [50] registered Oct 29 01:25:29.680105 kernel: acpiphp: Slot [51] registered Oct 29 01:25:29.680111 kernel: acpiphp: Slot [52] registered Oct 29 01:25:29.680117 kernel: acpiphp: Slot [53] registered Oct 29 01:25:29.680122 kernel: acpiphp: Slot [54] registered Oct 29 01:25:29.680128 kernel: acpiphp: Slot [55] registered Oct 29 01:25:29.680133 kernel: acpiphp: Slot [56] registered Oct 29 01:25:29.680139 kernel: acpiphp: Slot [57] registered Oct 29 01:25:29.680144 kernel: acpiphp: Slot [58] registered Oct 29 01:25:29.680150 kernel: acpiphp: Slot [59] registered Oct 29 01:25:29.680157 kernel: acpiphp: Slot [60] registered Oct 29 01:25:29.680162 kernel: acpiphp: Slot [61] registered Oct 29 01:25:29.680168 kernel: acpiphp: Slot [62] registered Oct 29 01:25:29.680173 kernel: acpiphp: Slot [63] registered Oct 29 01:25:29.680226 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Oct 29 01:25:29.680275 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 29 01:25:29.680322 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 29 01:25:29.680370 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 29 01:25:29.680417 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 29 01:25:29.680467 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 29 01:25:29.680515 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 29 01:25:29.680563 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 29 01:25:29.680610 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 29 01:25:29.680665 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Oct 29 01:25:29.680716 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Oct 29 01:25:29.680774 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 29 01:25:29.680847 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 29 01:25:29.680899 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 29 01:25:29.680949 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 29 01:25:29.680999 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 29 01:25:29.681047 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 29 01:25:29.681096 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 29 01:25:29.681145 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 29 01:25:29.681197 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 29 01:25:29.681245 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 29 01:25:29.681293 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 29 01:25:29.681343 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 29 01:25:29.681391 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 29 01:25:29.681438 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 29 01:25:29.681486 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 29 01:25:29.681536 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 29 01:25:29.681586 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 29 01:25:29.681635 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 29 01:25:29.681684 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 29 01:25:29.681732 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 29 01:25:29.681780 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 29 01:25:29.681838 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 29 01:25:29.681888 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 29 01:25:29.681935 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 29 01:25:29.681986 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 29 01:25:29.682033 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 29 01:25:29.682081 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 29 01:25:29.682130 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 29 01:25:29.682181 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 29 01:25:29.682228 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 29 01:25:29.682283 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Oct 29 01:25:29.682334 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Oct 29 01:25:29.682384 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Oct 29 01:25:29.682433 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Oct 29 01:25:29.682483 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Oct 29 01:25:29.682533 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 29 01:25:29.682584 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 29 01:25:29.682634 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 29 01:25:29.682684 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 29 01:25:29.682733 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 29 01:25:29.682782 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 29 01:25:29.682846 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 29 01:25:29.682898 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 29 01:25:29.682950 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 29 01:25:29.682997 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 29 01:25:29.683046 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 29 01:25:29.683107 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 29 01:25:29.683157 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 29 01:25:29.683205 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 29 01:25:29.683252 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 29 01:25:29.683303 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 29 01:25:29.683353 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 29 01:25:29.683403 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 29 01:25:29.683453 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 29 01:25:29.683501 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 29 01:25:29.683549 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 29 01:25:29.683598 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 29 01:25:29.683645 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 29 01:25:29.683692 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 29 01:25:29.683743 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 29 01:25:29.683791 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 29 01:25:29.693776 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 29 01:25:29.693874 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 29 01:25:29.693954 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 29 01:25:29.694030 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 29 01:25:29.694105 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 29 01:25:29.694177 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 29 01:25:29.694257 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 29 01:25:29.694332 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 29 01:25:29.694407 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 29 01:25:29.694480 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 29 01:25:29.694553 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 29 01:25:29.694627 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 29 01:25:29.694702 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 29 01:25:29.694778 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 29 01:25:29.700282 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 29 01:25:29.700353 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 29 01:25:29.700712 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 29 01:25:29.700770 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 29 01:25:29.700830 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 29 01:25:29.700883 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 29 01:25:29.700932 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 29 01:25:29.700984 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 29 01:25:29.701035 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 29 01:25:29.701084 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 29 01:25:29.701132 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 29 01:25:29.701197 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 29 01:25:29.701268 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 29 01:25:29.701318 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 29 01:25:29.701367 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 29 01:25:29.701418 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 29 01:25:29.701468 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 29 01:25:29.701518 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 29 01:25:29.701566 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 29 01:25:29.701944 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 29 01:25:29.702008 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 29 01:25:29.702067 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 29 01:25:29.702116 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 29 01:25:29.702168 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 29 01:25:29.702216 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 29 01:25:29.702266 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 29 01:25:29.702316 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 29 01:25:29.702363 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 29 01:25:29.702414 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 29 01:25:29.702462 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 29 01:25:29.702510 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 29 01:25:29.702562 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 29 
01:25:29.702611 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 29 01:25:29.702658 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 29 01:25:29.702709 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 29 01:25:29.702757 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 29 01:25:29.702813 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 29 01:25:29.702869 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 29 01:25:29.702918 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 29 01:25:29.702969 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 29 01:25:29.703020 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 29 01:25:29.703068 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 29 01:25:29.703129 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 29 01:25:29.703137 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 29 01:25:29.703143 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 29 01:25:29.703149 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 29 01:25:29.703155 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 29 01:25:29.703163 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 29 01:25:29.703168 kernel: iommu: Default domain type: Translated Oct 29 01:25:29.703174 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 29 01:25:29.703223 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 29 01:25:29.703273 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 29 01:25:29.703321 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 29 01:25:29.703329 kernel: vgaarb: loaded Oct 29 01:25:29.703335 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 29 01:25:29.703341 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 29 01:25:29.703348 kernel: PTP clock support registered Oct 29 01:25:29.703354 kernel: PCI: Using ACPI for IRQ routing Oct 29 01:25:29.703360 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 29 01:25:29.703366 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 29 01:25:29.703372 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 29 01:25:29.703377 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 29 01:25:29.703383 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 29 01:25:29.703389 kernel: clocksource: Switched to clocksource tsc-early Oct 29 01:25:29.703395 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 01:25:29.703402 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 01:25:29.703408 kernel: pnp: PnP ACPI init Oct 29 01:25:29.703461 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 29 01:25:29.703507 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 29 01:25:29.703552 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 29 01:25:29.703601 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 29 01:25:29.703649 kernel: pnp 00:06: [dma 2] Oct 29 01:25:29.703699 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 29 01:25:29.703743 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 29 01:25:29.703787 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 29 01:25:29.703794 kernel: pnp: PnP ACPI: found 8 devices Oct 29 01:25:29.704071 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 29 01:25:29.704080 kernel: NET: Registered PF_INET protocol family Oct 29 01:25:29.704086 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 01:25:29.704094 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Oct 29 01:25:29.704100 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 01:25:29.704106 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 29 01:25:29.704112 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 29 01:25:29.704117 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 29 01:25:29.704123 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 29 01:25:29.704129 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 29 01:25:29.704135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 01:25:29.704141 kernel: NET: Registered PF_XDP protocol family Oct 29 01:25:29.704204 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 29 01:25:29.704258 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 29 01:25:29.704312 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 29 01:25:29.704363 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 29 01:25:29.704415 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 29 01:25:29.704465 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 29 01:25:29.704520 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 29 01:25:29.704569 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 29 01:25:29.704620 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 29 01:25:29.704670 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 29 01:25:29.704721 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Oct 29 01:25:29.704771 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 29 01:25:29.704837 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 29 01:25:29.704888 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 29 01:25:29.704938 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 29 01:25:29.704988 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 29 01:25:29.705059 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 29 01:25:29.705331 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 29 01:25:29.705392 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 29 01:25:29.705444 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 29 01:25:29.705518 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 29 01:25:29.705778 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 29 01:25:29.705920 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 29 01:25:29.705974 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Oct 29 01:25:29.706031 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Oct 29 01:25:29.706086 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706136 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706185 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706232 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706281 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706328 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706379 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706426 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706475 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706522 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706572 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706619 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706668 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706714 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706763 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706820 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706870 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.706918 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.706966 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707015 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707063 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707111 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707159 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707210 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707259 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707307 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707355 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707404 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Oct 29 01:25:29.707452 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707499 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707546 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707597 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707645 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707694 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.707743 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.707792 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.708105 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.708159 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.708421 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.708486 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.708537 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.708586 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.708658 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.708962 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.709240 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.709294 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.709344 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.709405 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.709458 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.709508 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.709556 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.709604 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.709652 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.709701 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.709749 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.709798 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.710170 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.710443 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.710498 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.710550 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.710605 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.710654 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.710704 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.710752 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.710808 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.710858 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.710907 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.710957 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711006 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711055 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711105 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711154 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711202 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Oct 29 01:25:29.711251 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711300 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711347 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711398 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711447 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711496 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711544 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711593 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711643 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711692 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 29 01:25:29.711740 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 29 01:25:29.711790 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 29 01:25:29.712090 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 29 01:25:29.712147 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 29 01:25:29.712196 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 29 01:25:29.712244 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 29 01:25:29.712296 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Oct 29 01:25:29.712358 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 29 01:25:29.712415 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 29 01:25:29.712464 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 29 01:25:29.712513 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 29 01:25:29.712566 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 29 01:25:29.712614 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Oct 29 01:25:29.712662 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 29 01:25:29.712710 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 29 01:25:29.712759 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 29 01:25:29.712818 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 29 01:25:29.712871 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 29 01:25:29.712919 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 29 01:25:29.712967 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 29 01:25:29.713018 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 29 01:25:29.713066 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 29 01:25:29.713355 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 29 01:25:29.713410 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 29 01:25:29.713473 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 29 01:25:29.713550 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 29 01:25:29.713834 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 29 01:25:29.714104 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 29 01:25:29.714166 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 29 01:25:29.714217 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 29 01:25:29.714286 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 29 01:25:29.714346 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 29 01:25:29.714405 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 29 01:25:29.714543 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 29 01:25:29.714684 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Oct 29 
01:25:29.714836 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 29 01:25:29.714904 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 29 01:25:29.714953 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 29 01:25:29.715002 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 29 01:25:29.715052 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 29 01:25:29.715102 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 29 01:25:29.715150 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 29 01:25:29.715199 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 29 01:25:29.715248 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 29 01:25:29.715312 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 29 01:25:29.715377 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 29 01:25:29.715464 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 29 01:25:29.715515 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 29 01:25:29.715563 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 29 01:25:29.715611 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 29 01:25:29.715949 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 29 01:25:29.716009 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 29 01:25:29.716329 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 29 01:25:29.716394 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 29 01:25:29.716450 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 29 01:25:29.716500 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 29 01:25:29.716550 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 29 01:25:29.716600 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 29 
01:25:29.716648 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 29 01:25:29.716696 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 29 01:25:29.716745 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 29 01:25:29.716792 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 29 01:25:29.717180 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 29 01:25:29.717238 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 29 01:25:29.717288 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 29 01:25:29.717336 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 29 01:25:29.717387 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 29 01:25:29.717434 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 29 01:25:29.717482 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 29 01:25:29.717530 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 29 01:25:29.717580 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 29 01:25:29.717627 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 29 01:25:29.717675 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 29 01:25:29.717726 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 29 01:25:29.717775 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 29 01:25:29.717832 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 29 01:25:29.717880 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 29 01:25:29.718161 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 29 01:25:29.718213 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 29 01:25:29.718263 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 29 01:25:29.718335 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Oct 29 01:25:29.718602 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 29 01:25:29.718660 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 29 01:25:29.718710 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 29 01:25:29.718776 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 29 01:25:29.719083 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 29 01:25:29.719139 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 29 01:25:29.719398 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 29 01:25:29.719454 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 29 01:25:29.719796 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 29 01:25:29.719889 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 29 01:25:29.719941 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 29 01:25:29.720107 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 29 01:25:29.720172 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 29 01:25:29.720224 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 29 01:25:29.720567 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 29 01:25:29.720623 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 29 01:25:29.720676 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 29 01:25:29.720726 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 29 01:25:29.721154 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 29 01:25:29.721220 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 29 01:25:29.721275 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 29 01:25:29.721622 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 29 01:25:29.721680 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Oct 29 01:25:29.721731 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 29 01:25:29.721780 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 29 01:25:29.721954 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 29 01:25:29.722007 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 29 01:25:29.722348 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 29 01:25:29.722407 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 29 01:25:29.722459 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 29 01:25:29.722512 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 29 01:25:29.722562 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 29 01:25:29.722611 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 29 01:25:29.722659 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 29 01:25:29.722710 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 29 01:25:29.722754 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 29 01:25:29.722797 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 29 01:25:29.722852 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 29 01:25:29.722897 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 29 01:25:29.722944 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 29 01:25:29.722990 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 29 01:25:29.723177 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 29 01:25:29.723440 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 29 01:25:29.723501 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 29 01:25:29.723565 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Oct 29 01:25:29.723938 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 29 01:25:29.723987 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 29 01:25:29.724039 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 29 01:25:29.724086 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 29 01:25:29.724242 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 29 01:25:29.724298 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 29 01:25:29.724359 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 29 01:25:29.724410 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 29 01:25:29.724462 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 29 01:25:29.724507 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 29 01:25:29.724552 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 29 01:25:29.724613 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 29 01:25:29.724660 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 29 01:25:29.724710 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 29 01:25:29.724758 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 29 01:25:29.725135 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 29 01:25:29.725190 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 29 01:25:29.725242 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 29 01:25:29.725288 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 29 01:25:29.725337 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 29 01:25:29.725385 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 29 01:25:29.725457 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 29 01:25:29.725504 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 29 01:25:29.725549 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 29 01:25:29.725598 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 29 01:25:29.725644 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 29 01:25:29.725699 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 29 01:25:29.725749 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 29 01:25:29.725794 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 29 01:25:29.725855 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 29 01:25:29.725906 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 29 01:25:29.725951 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 29 01:25:29.726000 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 29 01:25:29.726048 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 29 01:25:29.726103 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 29 01:25:29.726148 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 29 01:25:29.726201 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 29 01:25:29.726246 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 29 01:25:29.726295 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 29 01:25:29.726484 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 29 01:25:29.726768 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 29 01:25:29.726858 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 29 01:25:29.726908 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 29 01:25:29.726966 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 29 01:25:29.727014 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 29 01:25:29.727059 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 29 01:25:29.727111 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 29 01:25:29.727157 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 29 01:25:29.727203 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 29 01:25:29.727252 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 29 01:25:29.727383 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 29 01:25:29.727529 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 29 01:25:29.727654 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 29 01:25:29.727710 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 29 01:25:29.727756 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 29 01:25:29.727817 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 29 01:25:29.727868 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 29 01:25:29.727918 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 29 01:25:29.727967 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 29 01:25:29.728030 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 29 01:25:29.728107 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 29 01:25:29.728398 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 29 01:25:29.728460 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 29 01:25:29.728739 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 29 01:25:29.728790 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 29 01:25:29.728857 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 29 01:25:29.728908 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 29 01:25:29.728962 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 29 01:25:29.729009 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 29 01:25:29.729059 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 29 01:25:29.729107 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 29 01:25:29.729158 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 29 01:25:29.729203 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 29 01:25:29.729254 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 29 01:25:29.729300 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 29 01:25:29.729351 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 29 01:25:29.729397 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 29 01:25:29.729456 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 29 01:25:29.729465 kernel: PCI: CLS 32 bytes, default 64 Oct 29 01:25:29.729472 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 29 01:25:29.729479 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 29 01:25:29.729485 kernel: clocksource: Switched to clocksource tsc Oct 29 01:25:29.729491 kernel: Initialise system trusted keyrings Oct 29 01:25:29.729498 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 29 01:25:29.729505 kernel: Key type asymmetric registered Oct 29 01:25:29.729512 kernel: Asymmetric key parser 'x509' registered Oct 29 01:25:29.729518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 29 01:25:29.729524 kernel: io scheduler mq-deadline registered Oct 29 01:25:29.729530 kernel: io scheduler kyber registered Oct 29 01:25:29.729537 kernel: io scheduler bfq 
registered Oct 29 01:25:29.729590 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 29 01:25:29.729641 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.729693 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 29 01:25:29.729744 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.729804 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 29 01:25:29.729864 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.729915 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 29 01:25:29.729966 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730017 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 29 01:25:29.730066 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730138 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 29 01:25:29.730189 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730261 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 29 01:25:29.730548 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730609 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 29 01:25:29.730665 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730718 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 29 01:25:29.730769 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730871 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 29 01:25:29.730922 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.730974 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 29 01:25:29.731023 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.731602 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 29 01:25:29.731661 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.731715 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 29 01:25:29.732031 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732089 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 29 01:25:29.732144 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732197 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 29 01:25:29.732247 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732299 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 29 01:25:29.732349 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732403 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 29 01:25:29.732452 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732504 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 29 01:25:29.732554 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732605 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 29 01:25:29.732656 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732707 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 29 01:25:29.732758 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732825 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 29 01:25:29.732877 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.732929 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 29 01:25:29.733306 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.733367 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 29 01:25:29.733423 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.733476 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 29 01:25:29.733678 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.733737 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 29 01:25:29.734084 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.734143 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 29 01:25:29.734496 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.734553 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 29 01:25:29.734607 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.734838 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 29 01:25:29.735130 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.735192 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 29 01:25:29.735244 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.735296 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 29 01:25:29.735347 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.735398 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 29 01:25:29.735775 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.735890 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 29 01:25:29.735945 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 29 01:25:29.735954 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 01:25:29.735961 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 01:25:29.735967 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 29 01:25:29.736119 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 29 01:25:29.736126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 01:25:29.736135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 01:25:29.736512 kernel: rtc_cmos 00:01: registered as rtc0 Oct 29 01:25:29.736583 kernel: rtc_cmos 00:01: setting system clock to 2025-10-29T01:25:29 UTC (1761701129) Oct 29 01:25:29.736643 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 29 01:25:29.736653 kernel: intel_pstate: CPU model not supported Oct 29 01:25:29.736660 kernel: NET: Registered PF_INET6 protocol family Oct 29 01:25:29.736667 kernel: Segment Routing with IPv6 Oct 29 01:25:29.736673 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 01:25:29.736681 kernel: NET: Registered PF_PACKET protocol family Oct 29 01:25:29.736687 kernel: Key type dns_resolver registered Oct 29 01:25:29.736694 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 01:25:29.736700 kernel: IPI shorthand broadcast: enabled Oct 29 01:25:29.736706 kernel: sched_clock: Marking stable (895002125, 222523137)->(1183423681, -65898419) Oct 29 01:25:29.736716 kernel: registered taskstats version 1 Oct 29 01:25:29.736723 kernel: Loading compiled-in X.509 certificates Oct 29 01:25:29.736729 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 88bc8a4d729b2f514b4a44a35b666d3248ded14a' Oct 29 01:25:29.736735 kernel: Key type .fscrypt registered Oct 29 01:25:29.736743 kernel: Key type fscrypt-provisioning 
registered Oct 29 01:25:29.736749 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 29 01:25:29.736755 kernel: ima: Allocated hash algorithm: sha1 Oct 29 01:25:29.736762 kernel: ima: No architecture policies found Oct 29 01:25:29.736768 kernel: clk: Disabling unused clocks Oct 29 01:25:29.736774 kernel: Freeing unused kernel image (initmem) memory: 47496K Oct 29 01:25:29.736780 kernel: Write protecting the kernel read-only data: 28672k Oct 29 01:25:29.736786 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 29 01:25:29.736794 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Oct 29 01:25:29.737077 kernel: Run /init as init process Oct 29 01:25:29.737086 kernel: with arguments: Oct 29 01:25:29.737093 kernel: /init Oct 29 01:25:29.737102 kernel: with environment: Oct 29 01:25:29.737108 kernel: HOME=/ Oct 29 01:25:29.737114 kernel: TERM=linux Oct 29 01:25:29.737120 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 29 01:25:29.737128 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 01:25:29.737142 systemd[1]: Detected virtualization vmware. Oct 29 01:25:29.737151 systemd[1]: Detected architecture x86-64. Oct 29 01:25:29.741844 systemd[1]: Running in initrd. Oct 29 01:25:29.741859 systemd[1]: No hostname configured, using default hostname. Oct 29 01:25:29.741866 systemd[1]: Hostname set to . Oct 29 01:25:29.741873 systemd[1]: Initializing machine ID from random generator. Oct 29 01:25:29.741879 systemd[1]: Queued start job for default target initrd.target. Oct 29 01:25:29.741886 systemd[1]: Started systemd-ask-password-console.path. Oct 29 01:25:29.741895 systemd[1]: Reached target cryptsetup.target. 
Oct 29 01:25:29.741901 systemd[1]: Reached target paths.target. Oct 29 01:25:29.741907 systemd[1]: Reached target slices.target. Oct 29 01:25:29.741914 systemd[1]: Reached target swap.target. Oct 29 01:25:29.741926 systemd[1]: Reached target timers.target. Oct 29 01:25:29.741933 systemd[1]: Listening on iscsid.socket. Oct 29 01:25:29.741939 systemd[1]: Listening on iscsiuio.socket. Oct 29 01:25:29.741946 systemd[1]: Listening on systemd-journald-audit.socket. Oct 29 01:25:29.741956 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 29 01:25:29.741963 systemd[1]: Listening on systemd-journald.socket. Oct 29 01:25:29.741969 systemd[1]: Listening on systemd-networkd.socket. Oct 29 01:25:29.741975 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 01:25:29.741982 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 01:25:29.741988 systemd[1]: Reached target sockets.target. Oct 29 01:25:29.741995 systemd[1]: Starting kmod-static-nodes.service... Oct 29 01:25:29.742001 systemd[1]: Finished network-cleanup.service. Oct 29 01:25:29.742009 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 01:25:29.742015 systemd[1]: Starting systemd-journald.service... Oct 29 01:25:29.742021 systemd[1]: Starting systemd-modules-load.service... Oct 29 01:25:29.742028 systemd[1]: Starting systemd-resolved.service... Oct 29 01:25:29.742034 systemd[1]: Starting systemd-vconsole-setup.service... Oct 29 01:25:29.742040 systemd[1]: Finished kmod-static-nodes.service. Oct 29 01:25:29.742046 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 01:25:29.742053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 01:25:29.742059 systemd[1]: Finished systemd-vconsole-setup.service. Oct 29 01:25:29.742067 kernel: audit: type=1130 audit(1761701129.660:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:29.742077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 01:25:29.742086 kernel: audit: type=1130 audit(1761701129.668:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.742092 systemd[1]: Starting dracut-cmdline-ask.service... Oct 29 01:25:29.742099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 01:25:29.742106 systemd[1]: Finished dracut-cmdline-ask.service. Oct 29 01:25:29.742112 kernel: audit: type=1130 audit(1761701129.685:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.742118 kernel: Bridge firewalling registered Oct 29 01:25:29.742126 systemd[1]: Starting dracut-cmdline.service... Oct 29 01:25:29.742132 systemd[1]: Started systemd-resolved.service. Oct 29 01:25:29.742139 kernel: audit: type=1130 audit(1761701129.695:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.742145 systemd[1]: Reached target nss-lookup.target. Oct 29 01:25:29.742152 kernel: SCSI subsystem initialized Oct 29 01:25:29.742160 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 01:25:29.742166 kernel: device-mapper: uevent: version 1.0.3 Oct 29 01:25:29.742172 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 29 01:25:29.742179 systemd[1]: Finished systemd-modules-load.service. Oct 29 01:25:29.742185 systemd[1]: Starting systemd-sysctl.service... 
Oct 29 01:25:29.742191 kernel: audit: type=1130 audit(1761701129.737:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.742202 systemd-journald[216]: Journal started Oct 29 01:25:29.742243 systemd-journald[216]: Runtime Journal (/run/log/journal/710163cc697f4372a571159356ad3787) is 4.8M, max 38.8M, 34.0M free. Oct 29 01:25:29.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.656950 systemd-modules-load[217]: Inserted module 'overlay' Oct 29 01:25:29.691904 systemd-resolved[218]: Positive Trust Anchors: Oct 29 01:25:29.691910 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 01:25:29.691941 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 01:25:29.744414 systemd[1]: Started systemd-journald.service. Oct 29 01:25:29.694917 systemd-resolved[218]: Defaulting to hostname 'linux'. Oct 29 01:25:29.700412 systemd-modules-load[217]: Inserted module 'br_netfilter' Oct 29 01:25:29.736886 systemd-modules-load[217]: Inserted module 'dm_multipath' Oct 29 01:25:29.744870 dracut-cmdline[232]: dracut-dracut-053 Oct 29 01:25:29.744870 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Oct 29 01:25:29.744870 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7 Oct 29 01:25:29.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.749733 systemd[1]: Finished systemd-sysctl.service. Oct 29 01:25:29.749949 kernel: audit: type=1130 audit(1761701129.744:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 29 01:25:29.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.752811 kernel: audit: type=1130 audit(1761701129.748:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.759814 kernel: Loading iSCSI transport class v2.0-870. Oct 29 01:25:29.771821 kernel: iscsi: registered transport (tcp) Oct 29 01:25:29.788815 kernel: iscsi: registered transport (qla4xxx) Oct 29 01:25:29.788854 kernel: QLogic iSCSI HBA Driver Oct 29 01:25:29.805633 systemd[1]: Finished dracut-cmdline.service. Oct 29 01:25:29.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:29.806301 systemd[1]: Starting dracut-pre-udev.service... Oct 29 01:25:29.809729 kernel: audit: type=1130 audit(1761701129.803:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:29.843825 kernel: raid6: avx2x4 gen() 47444 MB/s Oct 29 01:25:29.860818 kernel: raid6: avx2x4 xor() 20407 MB/s Oct 29 01:25:29.877821 kernel: raid6: avx2x2 gen() 51881 MB/s Oct 29 01:25:29.894819 kernel: raid6: avx2x2 xor() 28484 MB/s Oct 29 01:25:29.911819 kernel: raid6: avx2x1 gen() 44945 MB/s Oct 29 01:25:29.928817 kernel: raid6: avx2x1 xor() 27748 MB/s Oct 29 01:25:29.945814 kernel: raid6: sse2x4 gen() 21191 MB/s Oct 29 01:25:29.962822 kernel: raid6: sse2x4 xor() 11852 MB/s Oct 29 01:25:29.979849 kernel: raid6: sse2x2 gen() 21369 MB/s Oct 29 01:25:29.996814 kernel: raid6: sse2x2 xor() 12938 MB/s Oct 29 01:25:30.013822 kernel: raid6: sse2x1 gen() 17957 MB/s Oct 29 01:25:30.031006 kernel: raid6: sse2x1 xor() 8885 MB/s Oct 29 01:25:30.031034 kernel: raid6: using algorithm avx2x2 gen() 51881 MB/s Oct 29 01:25:30.031042 kernel: raid6: .... xor() 28484 MB/s, rmw enabled Oct 29 01:25:30.032192 kernel: raid6: using avx2x2 recovery algorithm Oct 29 01:25:30.040849 kernel: xor: automatically using best checksumming function avx Oct 29 01:25:30.102821 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 29 01:25:30.110817 kernel: audit: type=1130 audit(1761701130.106:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:30.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:30.109000 audit: BPF prog-id=7 op=LOAD Oct 29 01:25:30.109000 audit: BPF prog-id=8 op=LOAD Oct 29 01:25:30.108192 systemd[1]: Finished dracut-pre-udev.service. Oct 29 01:25:30.111109 systemd[1]: Starting systemd-udevd.service... Oct 29 01:25:30.123496 systemd-udevd[415]: Using default interface naming scheme 'v252'. 
Oct 29 01:25:30.126950 systemd[1]: Started systemd-udevd.service. Oct 29 01:25:30.127504 systemd[1]: Starting dracut-pre-trigger.service... Oct 29 01:25:30.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:30.135143 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Oct 29 01:25:30.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:30.150744 systemd[1]: Finished dracut-pre-trigger.service. Oct 29 01:25:30.151269 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 01:25:30.215858 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 01:25:30.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:30.270728 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 29 01:25:30.270766 kernel: vmw_pvscsi: using 64bit dma Oct 29 01:25:30.270774 kernel: vmw_pvscsi: max_id: 16 Oct 29 01:25:30.270781 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 29 01:25:30.280132 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 29 01:25:30.280167 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 29 01:25:30.280175 kernel: vmw_pvscsi: using MSI-X Oct 29 01:25:30.281451 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 29 01:25:30.282313 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 29 01:25:30.283668 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 29 01:25:30.295815 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Oct 29 01:25:30.301851 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 01:25:30.309370 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 29 01:25:30.312375 kernel: AVX2 version of gcm_enc/dec engaged. Oct 29 01:25:30.312386 kernel: AES CTR mode by8 optimization enabled Oct 29 01:25:30.312393 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 29 01:25:30.313813 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 29 01:25:30.326271 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Oct 29 01:25:30.331396 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 29 01:25:30.331474 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 29 01:25:30.331539 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 29 01:25:30.331601 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 29 01:25:30.331661 kernel: libata version 3.00 loaded. 
Oct 29 01:25:30.331669 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 29 01:25:30.331677 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 29 01:25:30.334813 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 29 01:25:30.337699 kernel: scsi host1: ata_piix Oct 29 01:25:30.337780 kernel: scsi host2: ata_piix Oct 29 01:25:30.337872 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Oct 29 01:25:30.337882 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Oct 29 01:25:30.357817 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (464) Oct 29 01:25:30.361935 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 29 01:25:30.365371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 01:25:30.366912 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 29 01:25:30.367040 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 29 01:25:30.367666 systemd[1]: Starting disk-uuid.service... Oct 29 01:25:30.370986 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 29 01:25:30.392816 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 29 01:25:30.396813 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 29 01:25:30.502818 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 29 01:25:30.506818 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 29 01:25:30.534844 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 29 01:25:30.552728 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 29 01:25:30.552740 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 29 01:25:31.398896 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 29 01:25:31.398931 disk-uuid[539]: The operation has completed successfully. Oct 29 01:25:31.473054 systemd[1]: disk-uuid.service: Deactivated successfully. 
Oct 29 01:25:31.473108 systemd[1]: Finished disk-uuid.service. Oct 29 01:25:31.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.473688 systemd[1]: Starting verity-setup.service... Oct 29 01:25:31.483813 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 29 01:25:31.527325 systemd[1]: Found device dev-mapper-usr.device. Oct 29 01:25:31.527618 systemd[1]: Finished verity-setup.service. Oct 29 01:25:31.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.528109 systemd[1]: Mounting sysusr-usr.mount... Oct 29 01:25:31.583418 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 29 01:25:31.582242 systemd[1]: Mounted sysusr-usr.mount. Oct 29 01:25:31.582860 systemd[1]: Starting afterburn-network-kargs.service... Oct 29 01:25:31.583317 systemd[1]: Starting ignition-setup.service... Oct 29 01:25:31.599817 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 01:25:31.599850 kernel: BTRFS info (device sda6): using free space tree Oct 29 01:25:31.599862 kernel: BTRFS info (device sda6): has skinny extents Oct 29 01:25:31.605815 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 29 01:25:31.611270 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 29 01:25:31.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 29 01:25:31.616749 systemd[1]: Finished ignition-setup.service. Oct 29 01:25:31.617292 systemd[1]: Starting ignition-fetch-offline.service... Oct 29 01:25:31.695059 systemd[1]: Finished afterburn-network-kargs.service. Oct 29 01:25:31.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.695953 systemd[1]: Starting parse-ip-for-networkd.service... Oct 29 01:25:31.742310 systemd[1]: Finished parse-ip-for-networkd.service. Oct 29 01:25:31.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.741000 audit: BPF prog-id=9 op=LOAD Oct 29 01:25:31.743220 systemd[1]: Starting systemd-networkd.service... Oct 29 01:25:31.757653 systemd-networkd[732]: lo: Link UP Oct 29 01:25:31.757897 systemd-networkd[732]: lo: Gained carrier Oct 29 01:25:31.758318 systemd-networkd[732]: Enumeration completed Oct 29 01:25:31.758499 systemd[1]: Started systemd-networkd.service. Oct 29 01:25:31.758671 systemd[1]: Reached target network.target. Oct 29 01:25:31.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.758860 systemd-networkd[732]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 29 01:25:31.759312 systemd[1]: Starting iscsiuio.service... 
Oct 29 01:25:31.762382 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 29 01:25:31.762505 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 29 01:25:31.763037 systemd-networkd[732]: ens192: Link UP Oct 29 01:25:31.763176 systemd-networkd[732]: ens192: Gained carrier Oct 29 01:25:31.766121 systemd[1]: Started iscsiuio.service. Oct 29 01:25:31.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.767064 systemd[1]: Starting iscsid.service... Oct 29 01:25:31.769855 iscsid[737]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 29 01:25:31.769855 iscsid[737]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 29 01:25:31.769855 iscsid[737]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 29 01:25:31.769855 iscsid[737]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 29 01:25:31.769855 iscsid[737]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 29 01:25:31.769855 iscsid[737]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 29 01:25:31.769855 iscsid[737]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 29 01:25:31.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.769655 systemd[1]: Started iscsid.service.
Oct 29 01:25:31.770476 systemd[1]: Starting dracut-initqueue.service... Oct 29 01:25:31.777310 systemd[1]: Finished dracut-initqueue.service. Oct 29 01:25:31.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.777474 systemd[1]: Reached target remote-fs-pre.target. Oct 29 01:25:31.777594 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 01:25:31.777989 systemd[1]: Reached target remote-fs.target. Oct 29 01:25:31.778717 systemd[1]: Starting dracut-pre-mount.service... Oct 29 01:25:31.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.783467 systemd[1]: Finished dracut-pre-mount.service. Oct 29 01:25:31.865613 ignition[604]: Ignition 2.14.0 Oct 29 01:25:31.865899 ignition[604]: Stage: fetch-offline Oct 29 01:25:31.866058 ignition[604]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 01:25:31.866222 ignition[604]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 29 01:25:31.869038 ignition[604]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 29 01:25:31.869257 ignition[604]: parsed url from cmdline: "" Oct 29 01:25:31.869299 ignition[604]: no config URL provided Oct 29 01:25:31.869415 ignition[604]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 01:25:31.869558 ignition[604]: no config at "/usr/lib/ignition/user.ign" Oct 29 01:25:31.874136 ignition[604]: config successfully fetched Oct 29 01:25:31.874200 ignition[604]: parsing config with SHA512: 
a3e0eabae07c39a4ea3af73956f05156db361d981bb62174a13d959239e14a988f2100d64b2e82210e8fa862a79421d3b41bfaed3dd8af699b14febea364d81d Oct 29 01:25:31.877065 unknown[604]: fetched base config from "system" Oct 29 01:25:31.877230 unknown[604]: fetched user config from "vmware" Oct 29 01:25:31.877728 ignition[604]: fetch-offline: fetch-offline passed Oct 29 01:25:31.877907 ignition[604]: Ignition finished successfully Oct 29 01:25:31.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.878530 systemd[1]: Finished ignition-fetch-offline.service. Oct 29 01:25:31.878676 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 01:25:31.879142 systemd[1]: Starting ignition-kargs.service... Oct 29 01:25:31.884362 ignition[752]: Ignition 2.14.0 Oct 29 01:25:31.884603 ignition[752]: Stage: kargs Oct 29 01:25:31.884778 ignition[752]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 01:25:31.884939 ignition[752]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 29 01:25:31.886250 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 29 01:25:31.887990 ignition[752]: kargs: kargs passed Oct 29 01:25:31.888041 ignition[752]: Ignition finished successfully Oct 29 01:25:31.888946 systemd[1]: Finished ignition-kargs.service. Oct 29 01:25:31.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:31.889549 systemd[1]: Starting ignition-disks.service... 
Oct 29 01:25:31.893789 ignition[758]: Ignition 2.14.0
Oct 29 01:25:31.894010 ignition[758]: Stage: disks
Oct 29 01:25:31.894184 ignition[758]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 29 01:25:31.894337 ignition[758]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 29 01:25:31.895659 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 29 01:25:31.897281 ignition[758]: disks: disks passed
Oct 29 01:25:31.897431 ignition[758]: Ignition finished successfully
Oct 29 01:25:31.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:31.898037 systemd[1]: Finished ignition-disks.service.
Oct 29 01:25:31.898195 systemd[1]: Reached target initrd-root-device.target.
Oct 29 01:25:31.898289 systemd[1]: Reached target local-fs-pre.target.
Oct 29 01:25:31.898375 systemd[1]: Reached target local-fs.target.
Oct 29 01:25:31.898457 systemd[1]: Reached target sysinit.target.
Oct 29 01:25:31.898537 systemd[1]: Reached target basic.target.
Oct 29 01:25:31.899141 systemd[1]: Starting systemd-fsck-root.service...
Oct 29 01:25:31.912384 systemd-fsck[766]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks
Oct 29 01:25:31.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:31.913466 systemd[1]: Finished systemd-fsck-root.service.
Oct 29 01:25:31.914034 systemd[1]: Mounting sysroot.mount...
Oct 29 01:25:31.923815 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 29 01:25:31.924243 systemd[1]: Mounted sysroot.mount.
Oct 29 01:25:31.924360 systemd[1]: Reached target initrd-root-fs.target.
Oct 29 01:25:31.925476 systemd[1]: Mounting sysroot-usr.mount...
Oct 29 01:25:31.925875 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 29 01:25:31.925898 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 01:25:31.925913 systemd[1]: Reached target ignition-diskful.target.
Oct 29 01:25:31.927321 systemd[1]: Mounted sysroot-usr.mount.
Oct 29 01:25:31.927923 systemd[1]: Starting initrd-setup-root.service...
Oct 29 01:25:31.930804 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory
Oct 29 01:25:31.937089 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory
Oct 29 01:25:31.940168 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory
Oct 29 01:25:31.942420 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 29 01:25:31.978973 systemd[1]: Finished initrd-setup-root.service.
Oct 29 01:25:31.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:31.979561 systemd[1]: Starting ignition-mount.service...
Oct 29 01:25:31.980040 systemd[1]: Starting sysroot-boot.service...
Oct 29 01:25:31.984173 bash[817]: umount: /sysroot/usr/share/oem: not mounted.
Oct 29 01:25:31.989504 ignition[818]: INFO : Ignition 2.14.0
Oct 29 01:25:31.989747 ignition[818]: INFO : Stage: mount
Oct 29 01:25:31.989944 ignition[818]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 29 01:25:31.990109 ignition[818]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 29 01:25:31.991605 ignition[818]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 29 01:25:31.993242 ignition[818]: INFO : mount: mount passed
Oct 29 01:25:31.993355 ignition[818]: INFO : Ignition finished successfully
Oct 29 01:25:31.994027 systemd[1]: Finished ignition-mount.service.
Oct 29 01:25:31.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:31.999369 systemd[1]: Finished sysroot-boot.service.
Oct 29 01:25:31.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:32.545667 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 29 01:25:32.569823 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (827)
Oct 29 01:25:32.573305 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 29 01:25:32.573328 kernel: BTRFS info (device sda6): using free space tree
Oct 29 01:25:32.573339 kernel: BTRFS info (device sda6): has skinny extents
Oct 29 01:25:32.577814 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 29 01:25:32.578966 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 29 01:25:32.579666 systemd[1]: Starting ignition-files.service...
Oct 29 01:25:32.591581 ignition[847]: INFO : Ignition 2.14.0
Oct 29 01:25:32.591878 ignition[847]: INFO : Stage: files
Oct 29 01:25:32.592121 ignition[847]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 29 01:25:32.592335 ignition[847]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 29 01:25:32.594436 ignition[847]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 29 01:25:32.597282 ignition[847]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 01:25:32.597837 ignition[847]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 01:25:32.597837 ignition[847]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 01:25:32.601945 ignition[847]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 01:25:32.602197 ignition[847]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 01:25:32.602971 unknown[847]: wrote ssh authorized keys file for user: core
Oct 29 01:25:32.603197 ignition[847]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 01:25:32.603658 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 29 01:25:32.603851 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 29 01:25:32.603851 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 29 01:25:32.603851 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 29 01:25:32.646616 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 29 01:25:32.684586 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 29 01:25:32.684884 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 01:25:32.685221 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 01:25:32.685416 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 01:25:32.685676 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 01:25:32.685876 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 01:25:32.686114 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 01:25:32.686307 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 01:25:32.686548 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 01:25:32.686937 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 01:25:32.687170 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 01:25:32.687364 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 01:25:32.687633 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 01:25:32.688076 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Oct 29 01:25:32.688301 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Oct 29 01:25:32.694291 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem616805380"
Oct 29 01:25:32.694518 ignition[847]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem616805380": device or resource busy
Oct 29 01:25:32.694737 ignition[847]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem616805380", trying btrfs: device or resource busy
Oct 29 01:25:32.694971 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem616805380"
Oct 29 01:25:32.695865 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem616805380"
Oct 29 01:25:32.697382 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem616805380"
Oct 29 01:25:32.697651 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem616805380"
Oct 29 01:25:32.697856 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Oct 29 01:25:32.698086 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 01:25:32.698197 systemd[1]: mnt-oem616805380.mount: Deactivated successfully.
Oct 29 01:25:32.698680 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 29 01:25:33.013602 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Oct 29 01:25:33.347006 systemd-networkd[732]: ens192: Gained IPv6LL
Oct 29 01:25:33.545519 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 29 01:25:33.545797 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 29 01:25:33.545982 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 29 01:25:33.545982 ignition[847]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Oct 29 01:25:33.545982 ignition[847]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Oct 29 01:25:33.545982 ignition[847]: INFO : files: op(12): [started] processing unit "containerd.service"
Oct 29 01:25:33.545982 ignition[847]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(12): [finished] processing unit "containerd.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(18): [started] setting preset to enabled for "vmtoolsd.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(18): [finished] setting preset to enabled for "vmtoolsd.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service"
Oct 29 01:25:33.546870 ignition[847]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 01:25:33.594198 ignition[847]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 29 01:25:33.594393 ignition[847]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 29 01:25:33.594393 ignition[847]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 01:25:33.594393 ignition[847]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 01:25:33.594393 ignition[847]: INFO : files: files passed
Oct 29 01:25:33.594393 ignition[847]: INFO : Ignition finished successfully
Oct 29 01:25:33.595626 systemd[1]: Finished ignition-files.service.
Oct 29 01:25:33.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.596856 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 29 01:25:33.596967 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 29 01:25:33.597355 systemd[1]: Starting ignition-quench.service...
Oct 29 01:25:33.599921 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 29 01:25:33.599977 systemd[1]: Finished ignition-quench.service.
Oct 29 01:25:33.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.601409 initrd-setup-root-after-ignition[873]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 01:25:33.601881 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 29 01:25:33.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.602033 systemd[1]: Reached target ignition-complete.target.
Oct 29 01:25:33.602510 systemd[1]: Starting initrd-parse-etc.service...
Oct 29 01:25:33.610394 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 29 01:25:33.610609 systemd[1]: Finished initrd-parse-etc.service.
Oct 29 01:25:33.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.610951 systemd[1]: Reached target initrd-fs.target.
Oct 29 01:25:33.611157 systemd[1]: Reached target initrd.target.
Oct 29 01:25:33.611378 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 29 01:25:33.611965 systemd[1]: Starting dracut-pre-pivot.service...
Oct 29 01:25:33.618249 systemd[1]: Finished dracut-pre-pivot.service.
Oct 29 01:25:33.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.618923 systemd[1]: Starting initrd-cleanup.service...
Oct 29 01:25:33.624413 systemd[1]: Stopped target nss-lookup.target.
Oct 29 01:25:33.624675 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 29 01:25:33.624950 systemd[1]: Stopped target timers.target.
Oct 29 01:25:33.625198 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 29 01:25:33.625388 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 29 01:25:33.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.625728 systemd[1]: Stopped target initrd.target.
Oct 29 01:25:33.625984 systemd[1]: Stopped target basic.target.
Oct 29 01:25:33.626260 systemd[1]: Stopped target ignition-complete.target.
Oct 29 01:25:33.626514 systemd[1]: Stopped target ignition-diskful.target.
Oct 29 01:25:33.626764 systemd[1]: Stopped target initrd-root-device.target.
Oct 29 01:25:33.627053 systemd[1]: Stopped target remote-fs.target.
Oct 29 01:25:33.627295 systemd[1]: Stopped target remote-fs-pre.target.
Oct 29 01:25:33.627548 systemd[1]: Stopped target sysinit.target.
Oct 29 01:25:33.627795 systemd[1]: Stopped target local-fs.target.
Oct 29 01:25:33.628058 systemd[1]: Stopped target local-fs-pre.target.
Oct 29 01:25:33.628300 systemd[1]: Stopped target swap.target.
Oct 29 01:25:33.628516 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 29 01:25:33.628716 systemd[1]: Stopped dracut-pre-mount.service.
Oct 29 01:25:33.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.629045 systemd[1]: Stopped target cryptsetup.target.
Oct 29 01:25:33.629308 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 29 01:25:33.629495 systemd[1]: Stopped dracut-initqueue.service.
Oct 29 01:25:33.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.629796 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 29 01:25:33.629999 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 29 01:25:33.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.630369 systemd[1]: Stopped target paths.target.
Oct 29 01:25:33.630588 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 01:25:33.633821 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 29 01:25:33.634094 systemd[1]: Stopped target slices.target.
Oct 29 01:25:33.634337 systemd[1]: Stopped target sockets.target.
Oct 29 01:25:33.634573 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 01:25:33.634742 systemd[1]: Closed iscsid.socket.
Oct 29 01:25:33.634989 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 29 01:25:33.635089 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 29 01:25:33.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.635312 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 29 01:25:33.635367 systemd[1]: Stopped ignition-files.service.
Oct 29 01:25:33.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.636006 systemd[1]: Stopping ignition-mount.service...
Oct 29 01:25:33.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.637000 systemd[1]: Stopping iscsiuio.service...
Oct 29 01:25:33.637068 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 29 01:25:33.637128 systemd[1]: Stopped kmod-static-nodes.service.
Oct 29 01:25:33.637630 systemd[1]: Stopping sysroot-boot.service...
Oct 29 01:25:33.637721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 29 01:25:33.637784 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 29 01:25:33.637999 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 29 01:25:33.638055 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 29 01:25:33.640939 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 29 01:25:33.640997 systemd[1]: Stopped iscsiuio.service.
Oct 29 01:25:33.643576 ignition[886]: INFO : Ignition 2.14.0
Oct 29 01:25:33.643576 ignition[886]: INFO : Stage: umount
Oct 29 01:25:33.643576 ignition[886]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 29 01:25:33.643576 ignition[886]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 29 01:25:33.644260 ignition[886]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 29 01:25:33.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.644651 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 29 01:25:33.644706 systemd[1]: Finished initrd-cleanup.service.
Oct 29 01:25:33.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.646178 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 01:25:33.646198 systemd[1]: Closed iscsiuio.socket.
Oct 29 01:25:33.648552 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 29 01:25:33.650344 ignition[886]: INFO : umount: umount passed
Oct 29 01:25:33.650463 ignition[886]: INFO : Ignition finished successfully
Oct 29 01:25:33.651057 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 29 01:25:33.651114 systemd[1]: Stopped ignition-mount.service.
Oct 29 01:25:33.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.651314 systemd[1]: Stopped target network.target.
Oct 29 01:25:33.651442 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 29 01:25:33.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.651466 systemd[1]: Stopped ignition-disks.service.
Oct 29 01:25:33.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.651620 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 29 01:25:33.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.651640 systemd[1]: Stopped ignition-kargs.service.
Oct 29 01:25:33.651779 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 01:25:33.651818 systemd[1]: Stopped ignition-setup.service.
Oct 29 01:25:33.651989 systemd[1]: Stopping systemd-networkd.service...
Oct 29 01:25:33.652197 systemd[1]: Stopping systemd-resolved.service...
Oct 29 01:25:33.656347 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 01:25:33.656405 systemd[1]: Stopped systemd-networkd.service.
Oct 29 01:25:33.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.656868 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 01:25:33.656890 systemd[1]: Closed systemd-networkd.socket.
Oct 29 01:25:33.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.657458 systemd[1]: Stopping network-cleanup.service...
Oct 29 01:25:33.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.657564 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 01:25:33.657592 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 29 01:25:33.658000 audit: BPF prog-id=9 op=UNLOAD
Oct 29 01:25:33.657737 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 29 01:25:33.657758 systemd[1]: Stopped afterburn-network-kargs.service.
Oct 29 01:25:33.658057 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 01:25:33.658080 systemd[1]: Stopped systemd-sysctl.service.
Oct 29 01:25:33.658363 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 01:25:33.658384 systemd[1]: Stopped systemd-modules-load.service.
Oct 29 01:25:33.660601 systemd[1]: Stopping systemd-udevd.service...
Oct 29 01:25:33.661362 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 29 01:25:33.662616 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 01:25:33.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.663299 systemd[1]: Stopped systemd-resolved.service.
Oct 29 01:25:33.663828 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 01:25:33.663876 systemd[1]: Stopped network-cleanup.service.
Oct 29 01:25:33.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.663000 audit: BPF prog-id=6 op=UNLOAD
Oct 29 01:25:33.666609 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 01:25:33.666684 systemd[1]: Stopped systemd-udevd.service.
Oct 29 01:25:33.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.666960 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 01:25:33.666979 systemd[1]: Closed systemd-udevd-control.socket.
Oct 29 01:25:33.667222 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 01:25:33.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.667239 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 29 01:25:33.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.667390 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 01:25:33.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.667412 systemd[1]: Stopped dracut-pre-udev.service.
Oct 29 01:25:33.667568 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 01:25:33.667589 systemd[1]: Stopped dracut-cmdline.service.
Oct 29 01:25:33.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:25:33.667732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 01:25:33.667751 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 29 01:25:33.668289 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 29 01:25:33.668396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 01:25:33.668428 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 29 01:25:33.671507 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 01:25:33.671699 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 29 01:25:33.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:33.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:33.809260 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 01:25:33.809503 systemd[1]: Stopped sysroot-boot.service. Oct 29 01:25:33.812657 kernel: kauditd_printk_skb: 63 callbacks suppressed Oct 29 01:25:33.812676 kernel: audit: type=1131 audit(1761701133.807:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:33.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:33.812836 systemd[1]: Reached target initrd-switch-root.target. Oct 29 01:25:33.813055 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 01:25:33.813209 systemd[1]: Stopped initrd-setup-root.service. Oct 29 01:25:33.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:33.815810 kernel: audit: type=1131 audit(1761701133.811:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:33.816262 systemd[1]: Starting initrd-switch-root.service... Oct 29 01:25:33.829144 systemd[1]: Switching root. Oct 29 01:25:33.828000 audit: BPF prog-id=5 op=UNLOAD Oct 29 01:25:33.831589 kernel: audit: type=1334 audit(1761701133.828:76): prog-id=5 op=UNLOAD Oct 29 01:25:33.831607 kernel: audit: type=1334 audit(1761701133.828:77): prog-id=4 op=UNLOAD Oct 29 01:25:33.828000 audit: BPF prog-id=4 op=UNLOAD Oct 29 01:25:33.832482 kernel: audit: type=1334 audit(1761701133.830:78): prog-id=3 op=UNLOAD Oct 29 01:25:33.830000 audit: BPF prog-id=3 op=UNLOAD Oct 29 01:25:33.832000 audit: BPF prog-id=8 op=UNLOAD Oct 29 01:25:33.836195 kernel: audit: type=1334 audit(1761701133.832:79): prog-id=8 op=UNLOAD Oct 29 01:25:33.836211 kernel: audit: type=1334 audit(1761701133.834:80): prog-id=7 op=UNLOAD Oct 29 01:25:33.834000 audit: BPF prog-id=7 op=UNLOAD Oct 29 01:25:33.849624 iscsid[737]: iscsid shutting down. Oct 29 01:25:33.849817 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Oct 29 01:25:33.849859 systemd-journald[216]: Journal stopped Oct 29 01:25:36.077066 kernel: audit: type=1335 audit(1761701133.848:81): pid=216 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Oct 29 01:25:36.077084 kernel: SELinux: Class mctp_socket not defined in policy. Oct 29 01:25:36.077093 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 29 01:25:36.077099 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 29 01:25:36.077105 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 01:25:36.077112 kernel: SELinux: policy capability open_perms=1 Oct 29 01:25:36.077119 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 01:25:36.077125 kernel: SELinux: policy capability always_check_network=0 Oct 29 01:25:36.077131 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 01:25:36.077136 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 01:25:36.077142 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 01:25:36.077148 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 01:25:36.077154 kernel: audit: type=1403 audit(1761701134.111:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 01:25:36.077163 systemd[1]: Successfully loaded SELinux policy in 42.768ms. Oct 29 01:25:36.077171 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.869ms. Oct 29 01:25:36.077179 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 01:25:36.077186 systemd[1]: Detected virtualization vmware. Oct 29 01:25:36.077193 systemd[1]: Detected architecture x86-64. Oct 29 01:25:36.077200 systemd[1]: Detected first boot. Oct 29 01:25:36.077207 systemd[1]: Initializing machine ID from random generator. Oct 29 01:25:36.077215 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Oct 29 01:25:36.077232 kernel: audit: type=1400 audit(1761701134.263:83): avc: denied { associate } for pid=938 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Oct 29 01:25:36.077240 systemd[1]: Populated /etc with preset unit settings. Oct 29 01:25:36.077250 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 01:25:36.077258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 01:25:36.077266 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 01:25:36.077272 systemd[1]: Queued start job for default target multi-user.target. Oct 29 01:25:36.077279 systemd[1]: Unnecessary job was removed for dev-sda6.device. Oct 29 01:25:36.077285 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 29 01:25:36.077292 systemd[1]: Created slice system-addon\x2drun.slice. Oct 29 01:25:36.077300 systemd[1]: Created slice system-getty.slice. Oct 29 01:25:36.077307 systemd[1]: Created slice system-modprobe.slice. Oct 29 01:25:36.077313 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 29 01:25:36.077320 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 29 01:25:36.077327 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 29 01:25:36.077333 systemd[1]: Created slice user.slice. Oct 29 01:25:36.077340 systemd[1]: Started systemd-ask-password-console.path. Oct 29 01:25:36.077347 systemd[1]: Started systemd-ask-password-wall.path. Oct 29 01:25:36.077355 systemd[1]: Set up automount boot.automount. 
Oct 29 01:25:36.077362 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 29 01:25:36.077370 systemd[1]: Reached target integritysetup.target. Oct 29 01:25:36.077377 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 01:25:36.077384 systemd[1]: Reached target remote-fs.target. Oct 29 01:25:36.077391 systemd[1]: Reached target slices.target. Oct 29 01:25:36.077407 systemd[1]: Reached target swap.target. Oct 29 01:25:36.077416 systemd[1]: Reached target torcx.target. Oct 29 01:25:36.077427 systemd[1]: Reached target veritysetup.target. Oct 29 01:25:36.077442 systemd[1]: Listening on systemd-coredump.socket. Oct 29 01:25:36.077454 systemd[1]: Listening on systemd-initctl.socket. Oct 29 01:25:36.077466 systemd[1]: Listening on systemd-journald-audit.socket. Oct 29 01:25:36.077476 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 29 01:25:36.077483 systemd[1]: Listening on systemd-journald.socket. Oct 29 01:25:36.077490 systemd[1]: Listening on systemd-networkd.socket. Oct 29 01:25:36.077497 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 01:25:36.077504 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 01:25:36.077513 systemd[1]: Listening on systemd-userdbd.socket. Oct 29 01:25:36.077520 systemd[1]: Mounting dev-hugepages.mount... Oct 29 01:25:36.077527 systemd[1]: Mounting dev-mqueue.mount... Oct 29 01:25:36.077535 systemd[1]: Mounting media.mount... Oct 29 01:25:36.077542 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 01:25:36.077550 systemd[1]: Mounting sys-kernel-debug.mount... Oct 29 01:25:36.077557 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 29 01:25:36.077564 systemd[1]: Mounting tmp.mount... Oct 29 01:25:36.077571 systemd[1]: Starting flatcar-tmpfiles.service... Oct 29 01:25:36.077578 systemd[1]: Starting ignition-delete-config.service... Oct 29 01:25:36.077585 systemd[1]: Starting kmod-static-nodes.service... 
Oct 29 01:25:36.077592 systemd[1]: Starting modprobe@configfs.service... Oct 29 01:25:36.077599 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 01:25:36.077606 systemd[1]: Starting modprobe@drm.service... Oct 29 01:25:36.077614 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 01:25:36.077622 systemd[1]: Starting modprobe@fuse.service... Oct 29 01:25:36.077630 systemd[1]: Starting modprobe@loop.service... Oct 29 01:25:36.077637 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 01:25:36.077645 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 29 01:25:36.077652 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Oct 29 01:25:36.077659 systemd[1]: Starting systemd-journald.service... Oct 29 01:25:36.077666 systemd[1]: Starting systemd-modules-load.service... Oct 29 01:25:36.077674 systemd[1]: Starting systemd-network-generator.service... Oct 29 01:25:36.077681 systemd[1]: Starting systemd-remount-fs.service... Oct 29 01:25:36.077688 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 01:25:36.077696 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 01:25:36.077703 systemd[1]: Mounted dev-hugepages.mount. Oct 29 01:25:36.077710 systemd[1]: Mounted dev-mqueue.mount. Oct 29 01:25:36.077716 systemd[1]: Mounted media.mount. Oct 29 01:25:36.077724 systemd[1]: Mounted sys-kernel-debug.mount. Oct 29 01:25:36.077731 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 29 01:25:36.077739 systemd[1]: Mounted tmp.mount. Oct 29 01:25:36.077746 systemd[1]: Finished kmod-static-nodes.service. Oct 29 01:25:36.077753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 01:25:36.077760 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 29 01:25:36.077769 systemd-journald[1031]: Journal started Oct 29 01:25:36.077805 systemd-journald[1031]: Runtime Journal (/run/log/journal/050c35612d1c42cab6329bcad19d678b) is 4.8M, max 38.8M, 34.0M free. Oct 29 01:25:35.995000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 01:25:35.995000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Oct 29 01:25:36.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.073000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 29 01:25:36.073000 audit[1031]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe19c3d930 a2=4000 a3=7ffe19c3d9cc items=0 ppid=1 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:25:36.073000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 29 01:25:36.078261 jq[1017]: true Oct 29 01:25:36.078735 jq[1041]: true Oct 29 01:25:36.090549 systemd[1]: Started systemd-journald.service. Oct 29 01:25:36.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:36.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:36.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.081502 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 01:25:36.081592 systemd[1]: Finished modprobe@drm.service. Oct 29 01:25:36.081840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 01:25:36.081925 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 01:25:36.082165 systemd[1]: Finished systemd-modules-load.service. Oct 29 01:25:36.082391 systemd[1]: Finished systemd-network-generator.service. Oct 29 01:25:36.082616 systemd[1]: Finished systemd-remount-fs.service. Oct 29 01:25:36.085397 systemd[1]: Reached target network-pre.target. Oct 29 01:25:36.085506 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 01:25:36.100821 kernel: loop: module loaded Oct 29 01:25:36.104810 kernel: fuse: init (API version 7.34) Oct 29 01:25:36.107197 systemd[1]: Starting systemd-hwdb-update.service... Oct 29 01:25:36.108070 systemd[1]: Starting systemd-journal-flush.service... Oct 29 01:25:36.108194 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 01:25:36.108946 systemd[1]: Starting systemd-random-seed.service... Oct 29 01:25:36.109733 systemd[1]: Starting systemd-sysctl.service... Oct 29 01:25:36.110232 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 01:25:36.110325 systemd[1]: Finished modprobe@configfs.service. Oct 29 01:25:36.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:36.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.110628 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 01:25:36.110708 systemd[1]: Finished modprobe@fuse.service. Oct 29 01:25:36.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.110929 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 01:25:36.111020 systemd[1]: Finished modprobe@loop.service. Oct 29 01:25:36.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.112006 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 29 01:25:36.113599 systemd[1]: Mounting sys-kernel-config.mount... Oct 29 01:25:36.113719 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 01:25:36.114587 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 29 01:25:36.116739 systemd[1]: Mounted sys-kernel-config.mount. 
Oct 29 01:25:36.128004 systemd-journald[1031]: Time spent on flushing to /var/log/journal/050c35612d1c42cab6329bcad19d678b is 53.918ms for 1934 entries. Oct 29 01:25:36.128004 systemd-journald[1031]: System Journal (/var/log/journal/050c35612d1c42cab6329bcad19d678b) is 8.0M, max 584.8M, 576.8M free. Oct 29 01:25:36.223230 systemd-journald[1031]: Received client request to flush runtime journal. Oct 29 01:25:36.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.130664 systemd[1]: Finished systemd-random-seed.service. Oct 29 01:25:36.130823 systemd[1]: Reached target first-boot-complete.target. Oct 29 01:25:36.150968 systemd[1]: Finished flatcar-tmpfiles.service. Oct 29 01:25:36.152037 systemd[1]: Starting systemd-sysusers.service... Oct 29 01:25:36.159175 systemd[1]: Finished systemd-sysctl.service. Oct 29 01:25:36.211695 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 01:25:36.212722 systemd[1]: Starting systemd-udev-settle.service... 
Oct 29 01:25:36.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.224937 systemd[1]: Finished systemd-journal-flush.service. Oct 29 01:25:36.225304 udevadm[1101]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 29 01:25:36.239165 systemd[1]: Finished systemd-sysusers.service. Oct 29 01:25:36.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.240247 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 01:25:36.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.273738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 01:25:36.287735 ignition[1051]: Ignition 2.14.0 Oct 29 01:25:36.287967 ignition[1051]: deleting config from guestinfo properties Oct 29 01:25:36.290366 ignition[1051]: Successfully deleted config Oct 29 01:25:36.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.290965 systemd[1]: Finished ignition-delete-config.service. Oct 29 01:25:36.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:36.614133 systemd[1]: Finished systemd-hwdb-update.service. Oct 29 01:25:36.615411 systemd[1]: Starting systemd-udevd.service... Oct 29 01:25:36.631013 systemd-udevd[1109]: Using default interface naming scheme 'v252'. Oct 29 01:25:36.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.649370 systemd[1]: Started systemd-udevd.service. Oct 29 01:25:36.650576 systemd[1]: Starting systemd-networkd.service... Oct 29 01:25:36.656995 systemd[1]: Starting systemd-userdbd.service... Oct 29 01:25:36.681278 systemd[1]: Found device dev-ttyS0.device. Oct 29 01:25:36.682537 systemd[1]: Started systemd-userdbd.service. Oct 29 01:25:36.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:36.709842 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 29 01:25:36.713849 kernel: ACPI: button: Power Button [PWRF] Oct 29 01:25:36.784000 audit[1116]: AVC avc: denied { confidentiality } for pid=1116 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 29 01:25:36.784000 audit[1116]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55950f42e660 a1=338ec a2=7fd81ca89bc5 a3=5 items=110 ppid=1109 pid=1116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:25:36.784000 audit: CWD cwd="/" Oct 29 01:25:36.784000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=1 name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=2 name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=3 name=(null) inode=24836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=4 name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=5 name=(null) inode=24837 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=6 name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=7 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=8 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=9 name=(null) inode=24839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=10 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=11 name=(null) inode=24840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=12 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=13 name=(null) inode=24841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=14 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=15 name=(null) inode=24842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=16 name=(null) inode=24838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=17 name=(null) inode=24843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=18 name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=19 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=20 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=21 name=(null) inode=24845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=22 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=23 name=(null) inode=24846 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=24 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=25 name=(null) inode=24847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=26 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=27 name=(null) inode=24848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=28 name=(null) inode=24844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=29 name=(null) inode=24849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=30 name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=31 name=(null) inode=24850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=32 name=(null) inode=24850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 
01:25:36.784000 audit: PATH item=33 name=(null) inode=24851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=34 name=(null) inode=24850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=35 name=(null) inode=24852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=36 name=(null) inode=24850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=37 name=(null) inode=24853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=38 name=(null) inode=24850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=39 name=(null) inode=24854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=40 name=(null) inode=24850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=41 name=(null) inode=24855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=42 
name=(null) inode=24835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=43 name=(null) inode=24856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=44 name=(null) inode=24856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=45 name=(null) inode=24857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=46 name=(null) inode=24856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=47 name=(null) inode=24858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=48 name=(null) inode=24856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=49 name=(null) inode=24859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=50 name=(null) inode=24856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=51 name=(null) inode=24860 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=52 name=(null) inode=24856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=53 name=(null) inode=24861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=55 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=56 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=57 name=(null) inode=24863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=58 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=59 name=(null) inode=24864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=60 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=61 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=62 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=63 name=(null) inode=24866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=64 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=65 name=(null) inode=24867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=66 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=67 name=(null) inode=24868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=68 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=69 name=(null) inode=24869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=70 name=(null) inode=24865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=71 name=(null) inode=24870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=72 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=73 name=(null) inode=24871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=74 name=(null) inode=24871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=75 name=(null) inode=24872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=76 name=(null) inode=24871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=77 name=(null) inode=24873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=78 name=(null) inode=24871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=79 name=(null) inode=24874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=80 name=(null) inode=24871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=81 name=(null) inode=24875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=82 name=(null) inode=24871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=83 name=(null) inode=24876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=84 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=85 name=(null) inode=24877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=86 name=(null) inode=24877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=87 name=(null) inode=24878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 
01:25:36.784000 audit: PATH item=88 name=(null) inode=24877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=89 name=(null) inode=24879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=90 name=(null) inode=24877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=91 name=(null) inode=24880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=92 name=(null) inode=24877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=93 name=(null) inode=24881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=94 name=(null) inode=24877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=95 name=(null) inode=24882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=96 name=(null) inode=24862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=97 
name=(null) inode=24883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=98 name=(null) inode=24883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=99 name=(null) inode=24884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=100 name=(null) inode=24883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=101 name=(null) inode=24885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=102 name=(null) inode=24883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=103 name=(null) inode=24886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=104 name=(null) inode=24883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=105 name=(null) inode=24887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=106 name=(null) inode=24883 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=107 name=(null) inode=24888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PATH item=109 name=(null) inode=24889 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:25:36.784000 audit: PROCTITLE proctitle="(udev-worker)" Oct 29 01:25:36.796874 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Oct 29 01:25:36.797424 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 29 01:25:36.798201 kernel: Guest personality initialized and is active Oct 29 01:25:36.801818 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 29 01:25:36.801888 kernel: Initialized host personality Oct 29 01:25:36.809975 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 29 01:25:36.811381 systemd-networkd[1110]: lo: Link UP Oct 29 01:25:36.811386 systemd-networkd[1110]: lo: Gained carrier Oct 29 01:25:36.811685 systemd-networkd[1110]: Enumeration completed Oct 29 01:25:36.812295 systemd-networkd[1110]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 29 01:25:36.812623 systemd[1]: Started systemd-networkd.service. Oct 29 01:25:36.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:36.815608 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 29 01:25:36.815731 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 29 01:25:36.816861 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Oct 29 01:25:36.817067 systemd-networkd[1110]: ens192: Link UP Oct 29 01:25:36.817199 systemd-networkd[1110]: ens192: Gained carrier Oct 29 01:25:36.843818 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Oct 29 01:25:36.848811 kernel: mousedev: PS/2 mouse device common for all mice Oct 29 01:25:36.852194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 01:25:36.864821 (udev-worker)[1113]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 29 01:25:36.871047 systemd[1]: Finished systemd-udev-settle.service. Oct 29 01:25:36.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.872040 systemd[1]: Starting lvm2-activation-early.service... Oct 29 01:25:36.894529 lvm[1143]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 01:25:36.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.920323 systemd[1]: Finished lvm2-activation-early.service. Oct 29 01:25:36.920501 systemd[1]: Reached target cryptsetup.target. Oct 29 01:25:36.921462 systemd[1]: Starting lvm2-activation.service... Oct 29 01:25:36.924814 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 01:25:36.954334 systemd[1]: Finished lvm2-activation.service. 
Oct 29 01:25:36.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.954484 systemd[1]: Reached target local-fs-pre.target. Oct 29 01:25:36.954579 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 01:25:36.954593 systemd[1]: Reached target local-fs.target. Oct 29 01:25:36.954680 systemd[1]: Reached target machines.target. Oct 29 01:25:36.955679 systemd[1]: Starting ldconfig.service... Oct 29 01:25:36.956319 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 01:25:36.956357 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:36.957179 systemd[1]: Starting systemd-boot-update.service... Oct 29 01:25:36.958051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 29 01:25:36.958898 systemd[1]: Starting systemd-machine-id-commit.service... Oct 29 01:25:36.960183 systemd[1]: Starting systemd-sysext.service... Oct 29 01:25:36.963828 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1148 (bootctl) Oct 29 01:25:36.964512 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 29 01:25:36.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:36.971353 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 29 01:25:36.976580 systemd[1]: Unmounting usr-share-oem.mount... 
Oct 29 01:25:36.978583 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 29 01:25:36.978705 systemd[1]: Unmounted usr-share-oem.mount. Oct 29 01:25:36.994817 kernel: loop0: detected capacity change from 0 to 224512 Oct 29 01:25:37.379643 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 01:25:37.380224 systemd[1]: Finished systemd-machine-id-commit.service. Oct 29 01:25:37.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.395852 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 01:25:37.411814 kernel: loop1: detected capacity change from 0 to 224512 Oct 29 01:25:37.415254 systemd-fsck[1160]: fsck.fat 4.2 (2021-01-31) Oct 29 01:25:37.415254 systemd-fsck[1160]: /dev/sda1: 790 files, 120772/258078 clusters Oct 29 01:25:37.416969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 29 01:25:37.418078 systemd[1]: Mounting boot.mount... Oct 29 01:25:37.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.436866 systemd[1]: Mounted boot.mount. Oct 29 01:25:37.442254 (sd-sysext)[1164]: Using extensions 'kubernetes'. Oct 29 01:25:37.443164 (sd-sysext)[1164]: Merged extensions into '/usr'. Oct 29 01:25:37.445570 systemd[1]: Finished systemd-boot-update.service. Oct 29 01:25:37.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:37.454333 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 01:25:37.455487 systemd[1]: Mounting usr-share-oem.mount... Oct 29 01:25:37.456310 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 01:25:37.457146 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 01:25:37.457846 systemd[1]: Starting modprobe@loop.service... Oct 29 01:25:37.457974 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.458048 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:37.458138 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 01:25:37.458586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 01:25:37.458672 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 01:25:37.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.458998 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 01:25:37.460362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 01:25:37.460439 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 29 01:25:37.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.461703 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 01:25:37.461796 systemd[1]: Finished modprobe@loop.service. Oct 29 01:25:37.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.462073 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.468086 systemd[1]: Mounted usr-share-oem.mount. Oct 29 01:25:37.468782 systemd[1]: Finished systemd-sysext.service. Oct 29 01:25:37.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.469891 systemd[1]: Starting ensure-sysext.service... Oct 29 01:25:37.471154 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 29 01:25:37.478933 systemd[1]: Reloading. Oct 29 01:25:37.484620 systemd-tmpfiles[1184]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Oct 29 01:25:37.487823 systemd-tmpfiles[1184]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 01:25:37.490464 systemd-tmpfiles[1184]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 01:25:37.505099 /usr/lib/systemd/system-generators/torcx-generator[1203]: time="2025-10-29T01:25:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 01:25:37.505541 /usr/lib/systemd/system-generators/torcx-generator[1203]: time="2025-10-29T01:25:37Z" level=info msg="torcx already run" Oct 29 01:25:37.615991 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 01:25:37.616005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 01:25:37.632161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 01:25:37.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.670655 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 29 01:25:37.672723 systemd[1]: Starting audit-rules.service... Oct 29 01:25:37.673699 systemd[1]: Starting clean-ca-certificates.service... Oct 29 01:25:37.674759 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 29 01:25:37.679319 systemd[1]: Starting systemd-resolved.service... 
Oct 29 01:25:37.680673 systemd[1]: Starting systemd-timesyncd.service... Oct 29 01:25:37.683749 systemd[1]: Starting systemd-update-utmp.service... Oct 29 01:25:37.684486 systemd[1]: Finished clean-ca-certificates.service. Oct 29 01:25:37.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.689431 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 01:25:37.691000 audit[1278]: SYSTEM_BOOT pid=1278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.694891 systemd[1]: Finished systemd-update-utmp.service. Oct 29 01:25:37.702625 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 01:25:37.703501 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 01:25:37.704562 systemd[1]: Starting modprobe@loop.service... Oct 29 01:25:37.705100 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.705862 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:37.705943 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 29 01:25:37.707862 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.707920 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:37.707975 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 01:25:37.712250 systemd[1]: Starting modprobe@drm.service... Oct 29 01:25:37.712402 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.712473 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:37.713681 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 29 01:25:37.713836 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 01:25:37.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.716267 systemd[1]: Finished ensure-sysext.service. Oct 29 01:25:37.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:25:37.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.717893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 01:25:37.717985 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 01:25:37.718271 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 29 01:25:37.718407 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 01:25:37.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.724997 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 01:25:37.725095 systemd[1]: Finished modprobe@drm.service. Oct 29 01:25:37.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.726402 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 01:25:37.726490 systemd[1]: Finished modprobe@loop.service. 
Oct 29 01:25:37.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:25:37.727037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 01:25:37.727121 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 01:25:37.727256 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.766368 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 01:25:37.772071 augenrules[1306]: No rules Oct 29 01:25:37.770000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 29 01:25:37.770000 audit[1306]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce15af250 a2=420 a3=0 items=0 ppid=1271 pid=1306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:25:37.770000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 29 01:25:37.772631 systemd[1]: Finished audit-rules.service. Oct 29 01:25:37.773286 systemd[1]: Finished ldconfig.service. Oct 29 01:25:37.774348 systemd[1]: Starting systemd-update-done.service... Oct 29 01:25:37.774498 systemd[1]: Started systemd-timesyncd.service. Oct 29 01:25:37.774690 systemd[1]: Reached target time-set.target. 
Oct 29 01:25:37.774902 systemd-resolved[1274]: Positive Trust Anchors: Oct 29 01:25:37.774909 systemd-resolved[1274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 01:25:37.774927 systemd-resolved[1274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 01:25:37.779376 systemd[1]: Finished systemd-update-done.service. Oct 29 01:25:37.794641 systemd-resolved[1274]: Defaulting to hostname 'linux'. Oct 29 01:25:37.795735 systemd[1]: Started systemd-resolved.service. Oct 29 01:25:37.795887 systemd[1]: Reached target network.target. Oct 29 01:25:37.795974 systemd[1]: Reached target nss-lookup.target. Oct 29 01:25:37.796067 systemd[1]: Reached target sysinit.target. Oct 29 01:25:37.796197 systemd[1]: Started motdgen.path. Oct 29 01:25:37.796298 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 29 01:25:37.796483 systemd[1]: Started logrotate.timer. Oct 29 01:25:37.796612 systemd[1]: Started mdadm.timer. Oct 29 01:25:37.796692 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 29 01:25:37.796787 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 01:25:37.796818 systemd[1]: Reached target paths.target. Oct 29 01:25:37.796900 systemd[1]: Reached target timers.target. Oct 29 01:25:37.797172 systemd[1]: Listening on dbus.socket. Oct 29 01:25:37.798157 systemd[1]: Starting docker.socket... Oct 29 01:25:37.799113 systemd[1]: Listening on sshd.socket. 
Oct 29 01:25:37.799250 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:37.799427 systemd[1]: Listening on docker.socket. Oct 29 01:25:37.799526 systemd[1]: Reached target sockets.target. Oct 29 01:25:37.799609 systemd[1]: Reached target basic.target. Oct 29 01:25:37.799764 systemd[1]: System is tainted: cgroupsv1 Oct 29 01:25:37.799788 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.799834 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 01:25:37.800635 systemd[1]: Starting containerd.service... Oct 29 01:25:37.801508 systemd[1]: Starting dbus.service... Oct 29 01:25:37.802897 systemd[1]: Starting enable-oem-cloudinit.service... Oct 29 01:25:37.803832 systemd[1]: Starting extend-filesystems.service... Oct 29 01:25:37.804071 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 29 01:25:37.804989 jq[1321]: false Oct 29 01:25:37.804870 systemd[1]: Starting motdgen.service... Oct 29 01:25:37.806161 systemd[1]: Starting prepare-helm.service... Oct 29 01:25:37.807967 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 29 01:25:37.809988 systemd[1]: Starting sshd-keygen.service... Oct 29 01:25:37.813566 systemd[1]: Starting systemd-logind.service... Oct 29 01:25:37.813920 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 01:25:37.813970 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 01:25:37.815746 systemd[1]: Starting update-engine.service... 
Oct 29 01:25:37.817642 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 29 01:25:37.818602 systemd[1]: Starting vmtoolsd.service... Oct 29 01:25:37.838558 jq[1334]: true Oct 29 01:25:37.819529 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 01:25:37.819664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 29 01:25:37.836469 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 01:25:37.836613 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 29 01:25:37.845707 jq[1340]: true Oct 29 01:25:37.846960 systemd[1]: Started vmtoolsd.service. Oct 29 01:27:16.048150 systemd-timesyncd[1275]: Contacted time server 141.11.228.173:123 (0.flatcar.pool.ntp.org). Oct 29 01:27:16.048213 systemd-timesyncd[1275]: Initial clock synchronization to Wed 2025-10-29 01:27:16.047984 UTC. Oct 29 01:27:16.053840 tar[1337]: linux-amd64/LICENSE Oct 29 01:27:16.053840 tar[1337]: linux-amd64/helm Oct 29 01:27:16.055253 systemd-resolved[1274]: Clock change detected. Flushing caches. Oct 29 01:27:16.059535 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 01:27:16.059686 systemd[1]: Finished motdgen.service. Oct 29 01:27:16.060735 dbus-daemon[1319]: [system] SELinux support is enabled Oct 29 01:27:16.061496 systemd[1]: Started dbus.service. Oct 29 01:27:16.062829 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 01:27:16.062845 systemd[1]: Reached target system-config.target. Oct 29 01:27:16.062969 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 01:27:16.062979 systemd[1]: Reached target user-config.target. 
Oct 29 01:27:16.078358 extend-filesystems[1322]: Found loop1 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda1 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda2 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda3 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found usr Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda4 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda6 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda7 Oct 29 01:27:16.078664 extend-filesystems[1322]: Found sda9 Oct 29 01:27:16.078664 extend-filesystems[1322]: Checking size of /dev/sda9 Oct 29 01:27:16.112789 env[1344]: time="2025-10-29T01:27:16.109413149Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 29 01:27:16.115623 bash[1371]: Updated "/home/core/.ssh/authorized_keys" Oct 29 01:27:16.115471 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 29 01:27:16.120332 extend-filesystems[1322]: Old size kept for /dev/sda9 Oct 29 01:27:16.120332 extend-filesystems[1322]: Found sr0 Oct 29 01:27:16.119961 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 01:27:16.120087 systemd[1]: Finished extend-filesystems.service. Oct 29 01:27:16.141134 update_engine[1332]: I1029 01:27:16.140561 1332 main.cc:92] Flatcar Update Engine starting Oct 29 01:27:16.144543 systemd[1]: Started update-engine.service. Oct 29 01:27:16.145942 systemd[1]: Started locksmithd.service. 
Oct 29 01:27:16.146935 update_engine[1332]: I1029 01:27:16.146790 1332 update_check_scheduler.cc:74] Next update check in 8m6s Oct 29 01:27:16.147196 kernel: NET: Registered PF_VSOCK protocol family Oct 29 01:27:16.164949 systemd-logind[1329]: Watching system buttons on /dev/input/event1 (Power Button) Oct 29 01:27:16.164963 systemd-logind[1329]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 01:27:16.170765 env[1344]: time="2025-10-29T01:27:16.170731340Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 29 01:27:16.171162 env[1344]: time="2025-10-29T01:27:16.170836683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 29 01:27:16.176082 env[1344]: time="2025-10-29T01:27:16.176005404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 29 01:27:16.176148 systemd-logind[1329]: New seat seat0. Oct 29 01:27:16.177483 systemd[1]: Started systemd-logind.service. Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178231171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178395990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178406751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178414520Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178420147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178469075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178605703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178687636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178697311Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178725233Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 29 01:27:16.178802 env[1344]: time="2025-10-29T01:27:16.178733044Z" level=info msg="metadata content store policy set" policy=shared Oct 29 01:27:16.181391 env[1344]: time="2025-10-29T01:27:16.181378780Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 29 01:27:16.181454 env[1344]: time="2025-10-29T01:27:16.181443236Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 29 01:27:16.181503 env[1344]: time="2025-10-29T01:27:16.181493111Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 29 01:27:16.181564 env[1344]: time="2025-10-29T01:27:16.181554530Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181615 env[1344]: time="2025-10-29T01:27:16.181605385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181664 env[1344]: time="2025-10-29T01:27:16.181654649Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181709 env[1344]: time="2025-10-29T01:27:16.181699526Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181758 env[1344]: time="2025-10-29T01:27:16.181748418Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181806 env[1344]: time="2025-10-29T01:27:16.181797260Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181851 env[1344]: time="2025-10-29T01:27:16.181842375Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181897 env[1344]: time="2025-10-29T01:27:16.181886794Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.181943 env[1344]: time="2025-10-29T01:27:16.181933861Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 29 01:27:16.182033 env[1344]: time="2025-10-29T01:27:16.182024473Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 29 01:27:16.182121 env[1344]: time="2025-10-29T01:27:16.182112848Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 29 01:27:16.182419 env[1344]: time="2025-10-29T01:27:16.182409092Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 29 01:27:16.182479 env[1344]: time="2025-10-29T01:27:16.182467827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.182553 env[1344]: time="2025-10-29T01:27:16.182544223Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 29 01:27:16.182617 env[1344]: time="2025-10-29T01:27:16.182607152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.182668 env[1344]: time="2025-10-29T01:27:16.182658257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.182742 env[1344]: time="2025-10-29T01:27:16.182733440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.182901 env[1344]: time="2025-10-29T01:27:16.182891332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.182951 env[1344]: time="2025-10-29T01:27:16.182941097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.183001 env[1344]: time="2025-10-29T01:27:16.182991358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.183050 env[1344]: time="2025-10-29T01:27:16.183040411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 29 01:27:16.183165 env[1344]: time="2025-10-29T01:27:16.183154923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.183240 env[1344]: time="2025-10-29T01:27:16.183221852Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 29 01:27:16.183392 env[1344]: time="2025-10-29T01:27:16.183381845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.183469 env[1344]: time="2025-10-29T01:27:16.183458590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.185710 env[1344]: time="2025-10-29T01:27:16.185696807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 29 01:27:16.185762 env[1344]: time="2025-10-29T01:27:16.185751614Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 29 01:27:16.185817 env[1344]: time="2025-10-29T01:27:16.185806357Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 29 01:27:16.185861 env[1344]: time="2025-10-29T01:27:16.185851699Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 29 01:27:16.185916 env[1344]: time="2025-10-29T01:27:16.185906065Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 29 01:27:16.185977 env[1344]: time="2025-10-29T01:27:16.185967803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 29 01:27:16.186158 env[1344]: time="2025-10-29T01:27:16.186124902Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.186264808Z" level=info msg="Connect containerd service" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.186286892Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.186852661Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.186989084Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187015886Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187272288Z" level=info msg="Start subscribing containerd event" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187301165Z" level=info msg="Start recovering state" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187348513Z" level=info msg="Start event monitor" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187361377Z" level=info msg="Start snapshots syncer" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187367315Z" level=info msg="Start cni network conf syncer for default" Oct 29 01:27:16.188410 env[1344]: time="2025-10-29T01:27:16.187371526Z" level=info msg="Start streaming server" Oct 29 01:27:16.187093 systemd[1]: Started containerd.service. 
Oct 29 01:27:16.201702 env[1344]: time="2025-10-29T01:27:16.200275632Z" level=info msg="containerd successfully booted in 0.097437s" Oct 29 01:27:16.260091 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 01:27:16.260144 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 01:27:16.494579 tar[1337]: linux-amd64/README.md Oct 29 01:27:16.499557 systemd[1]: Finished prepare-helm.service. Oct 29 01:27:16.634561 locksmithd[1389]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 01:27:16.794162 sshd_keygen[1342]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 01:27:16.795360 systemd-networkd[1110]: ens192: Gained IPv6LL Oct 29 01:27:16.797511 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 29 01:27:16.797805 systemd[1]: Reached target network-online.target. Oct 29 01:27:16.801015 systemd[1]: Starting kubelet.service... Oct 29 01:27:16.813357 systemd[1]: Finished sshd-keygen.service. Oct 29 01:27:16.814631 systemd[1]: Starting issuegen.service... Oct 29 01:27:16.818496 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 01:27:16.818629 systemd[1]: Finished issuegen.service. Oct 29 01:27:16.819885 systemd[1]: Starting systemd-user-sessions.service... Oct 29 01:27:16.825907 systemd[1]: Finished systemd-user-sessions.service. Oct 29 01:27:16.826889 systemd[1]: Started getty@tty1.service. Oct 29 01:27:16.827714 systemd[1]: Started serial-getty@ttyS0.service. Oct 29 01:27:16.827911 systemd[1]: Reached target getty.target. Oct 29 01:27:18.194176 systemd[1]: Started kubelet.service. Oct 29 01:27:18.194517 systemd[1]: Reached target multi-user.target. Oct 29 01:27:18.195634 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 29 01:27:18.200736 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Oct 29 01:27:18.200883 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 29 01:27:18.201071 systemd[1]: Startup finished in 5.448s (kernel) + 5.940s (userspace) = 11.388s. Oct 29 01:27:18.229556 login[1468]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Oct 29 01:27:18.230736 login[1467]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 29 01:27:18.240333 systemd[1]: Created slice user-500.slice. Oct 29 01:27:18.240983 systemd[1]: Starting user-runtime-dir@500.service... Oct 29 01:27:18.243882 systemd-logind[1329]: New session 2 of user core. Oct 29 01:27:18.247080 systemd[1]: Finished user-runtime-dir@500.service. Oct 29 01:27:18.247875 systemd[1]: Starting user@500.service... Oct 29 01:27:18.250793 (systemd)[1480]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:18.333070 systemd[1480]: Queued start job for default target default.target. Oct 29 01:27:18.333682 systemd[1480]: Reached target paths.target. Oct 29 01:27:18.333756 systemd[1480]: Reached target sockets.target. Oct 29 01:27:18.333820 systemd[1480]: Reached target timers.target. Oct 29 01:27:18.333889 systemd[1480]: Reached target basic.target. Oct 29 01:27:18.334017 systemd[1]: Started user@500.service. Oct 29 01:27:18.334616 systemd[1]: Started session-2.scope. Oct 29 01:27:18.334734 systemd[1480]: Reached target default.target. Oct 29 01:27:18.334827 systemd[1480]: Startup finished in 78ms. Oct 29 01:27:19.230739 login[1468]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 29 01:27:19.234448 systemd[1]: Started session-1.scope. Oct 29 01:27:19.234765 systemd-logind[1329]: New session 1 of user core. 
Oct 29 01:27:19.293072 kubelet[1474]: E1029 01:27:19.293048 1474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 01:27:19.294208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 01:27:19.294310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 01:27:29.544935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 01:27:29.545083 systemd[1]: Stopped kubelet.service. Oct 29 01:27:29.546376 systemd[1]: Starting kubelet.service... Oct 29 01:27:29.856222 systemd[1]: Started kubelet.service. Oct 29 01:27:29.932852 kubelet[1515]: E1029 01:27:29.932824 1515 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 01:27:29.935417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 01:27:29.935529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 01:27:40.140547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 01:27:40.140663 systemd[1]: Stopped kubelet.service. Oct 29 01:27:40.141779 systemd[1]: Starting kubelet.service... Oct 29 01:27:40.441607 systemd[1]: Started kubelet.service. 
Oct 29 01:27:40.467136 kubelet[1529]: E1029 01:27:40.467112 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 01:27:40.468309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 01:27:40.468400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 01:27:46.311006 systemd[1]: Created slice system-sshd.slice. Oct 29 01:27:46.311931 systemd[1]: Started sshd@0-139.178.70.110:22-139.178.68.195:59196.service. Oct 29 01:27:46.358014 sshd[1537]: Accepted publickey for core from 139.178.68.195 port 59196 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:46.359112 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:46.362634 systemd[1]: Started session-3.scope. Oct 29 01:27:46.363001 systemd-logind[1329]: New session 3 of user core. Oct 29 01:27:46.411833 systemd[1]: Started sshd@1-139.178.70.110:22-139.178.68.195:59200.service. Oct 29 01:27:46.452074 sshd[1542]: Accepted publickey for core from 139.178.68.195 port 59200 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:46.453108 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:46.455824 systemd-logind[1329]: New session 4 of user core. Oct 29 01:27:46.456147 systemd[1]: Started session-4.scope. Oct 29 01:27:46.507804 sshd[1542]: pam_unix(sshd:session): session closed for user core Oct 29 01:27:46.507947 systemd[1]: Started sshd@2-139.178.70.110:22-139.178.68.195:59204.service. Oct 29 01:27:46.510001 systemd-logind[1329]: Session 4 logged out. Waiting for processes to exit. 
Oct 29 01:27:46.510149 systemd[1]: sshd@1-139.178.70.110:22-139.178.68.195:59200.service: Deactivated successfully. Oct 29 01:27:46.510694 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 01:27:46.511060 systemd-logind[1329]: Removed session 4. Oct 29 01:27:46.546089 sshd[1547]: Accepted publickey for core from 139.178.68.195 port 59204 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:46.546917 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:46.550090 systemd-logind[1329]: New session 5 of user core. Oct 29 01:27:46.550442 systemd[1]: Started session-5.scope. Oct 29 01:27:46.599091 sshd[1547]: pam_unix(sshd:session): session closed for user core Oct 29 01:27:46.601750 systemd[1]: Started sshd@3-139.178.70.110:22-139.178.68.195:59216.service. Oct 29 01:27:46.602091 systemd[1]: sshd@2-139.178.70.110:22-139.178.68.195:59204.service: Deactivated successfully. Oct 29 01:27:46.602884 systemd-logind[1329]: Session 5 logged out. Waiting for processes to exit. Oct 29 01:27:46.602925 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 01:27:46.605602 systemd-logind[1329]: Removed session 5. Oct 29 01:27:46.638827 sshd[1554]: Accepted publickey for core from 139.178.68.195 port 59216 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:46.639968 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:46.643122 systemd-logind[1329]: New session 6 of user core. Oct 29 01:27:46.643456 systemd[1]: Started session-6.scope. Oct 29 01:27:46.695142 sshd[1554]: pam_unix(sshd:session): session closed for user core Oct 29 01:27:46.697222 systemd[1]: Started sshd@4-139.178.70.110:22-139.178.68.195:59220.service. Oct 29 01:27:46.697856 systemd[1]: sshd@3-139.178.70.110:22-139.178.68.195:59216.service: Deactivated successfully. Oct 29 01:27:46.698622 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 29 01:27:46.698710 systemd-logind[1329]: Session 6 logged out. Waiting for processes to exit. Oct 29 01:27:46.701729 systemd-logind[1329]: Removed session 6. Oct 29 01:27:46.734558 sshd[1561]: Accepted publickey for core from 139.178.68.195 port 59220 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:46.735617 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:46.738695 systemd-logind[1329]: New session 7 of user core. Oct 29 01:27:46.739040 systemd[1]: Started session-7.scope. Oct 29 01:27:46.823479 sudo[1567]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 01:27:46.823674 sudo[1567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 01:27:46.831570 dbus-daemon[1319]: Ѝ5BTV: received setenforce notice (enforcing=1042009568) Oct 29 01:27:46.831747 sudo[1567]: pam_unix(sudo:session): session closed for user root Oct 29 01:27:46.835702 systemd[1]: Started sshd@5-139.178.70.110:22-139.178.68.195:59232.service. Oct 29 01:27:46.836315 sshd[1561]: pam_unix(sshd:session): session closed for user core Oct 29 01:27:46.843434 systemd[1]: sshd@4-139.178.70.110:22-139.178.68.195:59220.service: Deactivated successfully. Oct 29 01:27:46.843881 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 01:27:46.844760 systemd-logind[1329]: Session 7 logged out. Waiting for processes to exit. Oct 29 01:27:46.845262 systemd-logind[1329]: Removed session 7. Oct 29 01:27:46.871889 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 59232 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:46.872388 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:46.875342 systemd[1]: Started session-8.scope. Oct 29 01:27:46.875539 systemd-logind[1329]: New session 8 of user core. 
Oct 29 01:27:46.925831 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 01:27:46.926010 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 01:27:46.928158 sudo[1576]: pam_unix(sudo:session): session closed for user root Oct 29 01:27:46.931669 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 29 01:27:46.932034 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 01:27:46.939086 systemd[1]: Stopping audit-rules.service... Oct 29 01:27:46.939000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 29 01:27:46.941257 kernel: kauditd_printk_skb: 187 callbacks suppressed Oct 29 01:27:46.941300 kernel: audit: type=1305 audit(1761701266.939:147): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 29 01:27:46.941453 auditctl[1579]: No rules Oct 29 01:27:46.943686 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 01:27:46.943859 systemd[1]: Stopped audit-rules.service. Oct 29 01:27:46.945067 systemd[1]: Starting audit-rules.service... 
Oct 29 01:27:46.939000 audit[1579]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf4eb7a20 a2=420 a3=0 items=0 ppid=1 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:46.939000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 29 01:27:46.952111 kernel: audit: type=1300 audit(1761701266.939:147): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf4eb7a20 a2=420 a3=0 items=0 ppid=1 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:46.952147 kernel: audit: type=1327 audit(1761701266.939:147): proctitle=2F7362696E2F617564697463746C002D44 Oct 29 01:27:46.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.957220 kernel: audit: type=1131 audit(1761701266.942:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.961465 augenrules[1597]: No rules Oct 29 01:27:46.961887 systemd[1]: Finished audit-rules.service. Oct 29 01:27:46.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:27:46.964626 sudo[1575]: pam_unix(sudo:session): session closed for user root Oct 29 01:27:46.963000 audit[1575]: USER_END pid=1575 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.967958 kernel: audit: type=1130 audit(1761701266.960:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.967993 kernel: audit: type=1106 audit(1761701266.963:150): pid=1575 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.968708 sshd[1569]: pam_unix(sshd:session): session closed for user core Oct 29 01:27:46.963000 audit[1575]: CRED_DISP pid=1575 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.970489 systemd[1]: Started sshd@6-139.178.70.110:22-139.178.68.195:59240.service. Oct 29 01:27:46.975197 kernel: audit: type=1104 audit(1761701266.963:151): pid=1575 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.975242 kernel: audit: type=1130 audit(1761701266.969:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.68.195:59240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:27:46.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.68.195:59240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:46.975000 audit[1569]: USER_END pid=1569 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:46.977060 systemd[1]: sshd@5-139.178.70.110:22-139.178.68.195:59232.service: Deactivated successfully. Oct 29 01:27:46.977473 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 01:27:46.980715 systemd-logind[1329]: Session 8 logged out. Waiting for processes to exit. Oct 29 01:27:46.981215 kernel: audit: type=1106 audit(1761701266.975:153): pid=1569 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:46.981316 systemd-logind[1329]: Removed session 8. 
Oct 29 01:27:46.975000 audit[1569]: CRED_DISP pid=1569 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:46.986197 kernel: audit: type=1104 audit(1761701266.975:154): pid=1569 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:46.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.110:22-139.178.68.195:59232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:47.007000 audit[1602]: USER_ACCT pid=1602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:47.009217 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 59240 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:27:47.008000 audit[1602]: CRED_ACQ pid=1602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:47.009000 audit[1602]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd716df6f0 a2=3 a3=0 items=0 ppid=1 pid=1602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.009000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 
01:27:47.010541 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:27:47.014070 systemd[1]: Started session-9.scope. Oct 29 01:27:47.015016 systemd-logind[1329]: New session 9 of user core. Oct 29 01:27:47.017000 audit[1602]: USER_START pid=1602 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:47.018000 audit[1607]: CRED_ACQ pid=1607 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:27:47.063000 audit[1608]: USER_ACCT pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:47.065215 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 01:27:47.065395 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 01:27:47.064000 audit[1608]: CRED_REFR pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:47.065000 audit[1608]: USER_START pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:27:47.090733 systemd[1]: Starting docker.service... 
Oct 29 01:27:47.117578 env[1618]: time="2025-10-29T01:27:47.117553911Z" level=info msg="Starting up" Oct 29 01:27:47.118438 env[1618]: time="2025-10-29T01:27:47.118423341Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 01:27:47.118438 env[1618]: time="2025-10-29T01:27:47.118435811Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 01:27:47.118486 env[1618]: time="2025-10-29T01:27:47.118448651Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 01:27:47.118486 env[1618]: time="2025-10-29T01:27:47.118454268Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 01:27:47.119531 env[1618]: time="2025-10-29T01:27:47.119515962Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 01:27:47.119531 env[1618]: time="2025-10-29T01:27:47.119527692Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 01:27:47.119574 env[1618]: time="2025-10-29T01:27:47.119535234Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 01:27:47.119574 env[1618]: time="2025-10-29T01:27:47.119539740Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 01:27:47.129048 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3940194017-merged.mount: Deactivated successfully. Oct 29 01:27:47.142553 env[1618]: time="2025-10-29T01:27:47.142534077Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 29 01:27:47.142553 env[1618]: time="2025-10-29T01:27:47.142547515Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 29 01:27:47.142664 env[1618]: time="2025-10-29T01:27:47.142652909Z" level=info msg="Loading containers: start." 
Oct 29 01:27:47.180000 audit[1648]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.180000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff7fe79250 a2=0 a3=7fff7fe7923c items=0 ppid=1618 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.180000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 29 01:27:47.182000 audit[1650]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.182000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdaa519490 a2=0 a3=7ffdaa51947c items=0 ppid=1618 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.182000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 29 01:27:47.183000 audit[1652]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.183000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd5fe19420 a2=0 a3=7ffd5fe1940c items=0 ppid=1618 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.183000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 29 01:27:47.184000 audit[1654]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.184000 audit[1654]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe123c4f00 a2=0 a3=7ffe123c4eec items=0 ppid=1618 pid=1654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.184000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 29 01:27:47.185000 audit[1656]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.185000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd4e00d160 a2=0 a3=7ffd4e00d14c items=0 ppid=1618 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.185000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 29 01:27:47.200000 audit[1661]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.200000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc3c5a8620 a2=0 a3=7ffc3c5a860c items=0 ppid=1618 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.200000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 29 01:27:47.203000 audit[1663]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.203000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc87d4690 a2=0 a3=7ffcc87d467c items=0 ppid=1618 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.203000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 29 01:27:47.204000 audit[1665]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.204000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc38325360 a2=0 a3=7ffc3832534c items=0 ppid=1618 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.204000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 29 01:27:47.206000 audit[1667]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.206000 audit[1667]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffff33d12f0 a2=0 a3=7ffff33d12dc items=0 ppid=1618 pid=1667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.206000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 29 01:27:47.210000 audit[1671]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.210000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffde053c680 a2=0 a3=7ffde053c66c items=0 ppid=1618 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.210000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 29 01:27:47.214000 audit[1672]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.214000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff5c4dd6f0 a2=0 a3=7fff5c4dd6dc items=0 ppid=1618 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.214000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 29 01:27:47.222198 kernel: Initializing XFRM netlink socket Oct 29 01:27:47.244111 env[1618]: time="2025-10-29T01:27:47.244092646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 29 01:27:47.259000 audit[1680]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.259000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffe6da9670 a2=0 a3=7fffe6da965c items=0 ppid=1618 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.259000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 29 01:27:47.269000 audit[1683]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.269000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdb1f70a60 a2=0 a3=7ffdb1f70a4c items=0 ppid=1618 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.269000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 29 01:27:47.271000 audit[1686]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.271000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffde6e50520 a2=0 a3=7ffde6e5050c items=0 ppid=1618 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.271000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 29 01:27:47.272000 audit[1688]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.272000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcc2048490 a2=0 a3=7ffcc204847c items=0 ppid=1618 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.272000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 29 01:27:47.273000 audit[1690]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.273000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd72a18ba0 a2=0 a3=7ffd72a18b8c items=0 ppid=1618 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.273000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 29 01:27:47.274000 audit[1692]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.274000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc01f9fef0 a2=0 a3=7ffc01f9fedc items=0 ppid=1618 
pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 29 01:27:47.276000 audit[1694]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.276000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff05683290 a2=0 a3=7fff0568327c items=0 ppid=1618 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.276000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 29 01:27:47.281000 audit[1697]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.281000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe779bc860 a2=0 a3=7ffe779bc84c items=0 ppid=1618 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.281000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 29 01:27:47.282000 audit[1699]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.282000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffca5653c80 a2=0 a3=7ffca5653c6c items=0 ppid=1618 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 29 01:27:47.283000 audit[1701]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.283000 audit[1701]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffead03770 a2=0 a3=7fffead0375c items=0 ppid=1618 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.283000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 29 01:27:47.284000 audit[1703]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.284000 audit[1703]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe2c35eaf0 a2=0 a3=7ffe2c35eadc items=0 ppid=1618 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.284000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 29 01:27:47.286622 systemd-networkd[1110]: docker0: Link UP Oct 29 01:27:47.289000 audit[1707]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.289000 audit[1707]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff68a66260 a2=0 a3=7fff68a6624c items=0 ppid=1618 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.289000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 29 01:27:47.293000 audit[1708]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:27:47.293000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff3e6a89b0 a2=0 a3=7fff3e6a899c items=0 ppid=1618 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:27:47.293000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 29 01:27:47.294954 env[1618]: time="2025-10-29T01:27:47.294940361Z" level=info msg="Loading containers: done." Oct 29 01:27:47.300827 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2661004223-merged.mount: Deactivated successfully. 
Oct 29 01:27:47.305155 env[1618]: time="2025-10-29T01:27:47.305131358Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 01:27:47.305265 env[1618]: time="2025-10-29T01:27:47.305252194Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 29 01:27:47.305320 env[1618]: time="2025-10-29T01:27:47.305308285Z" level=info msg="Daemon has completed initialization" Oct 29 01:27:47.311144 systemd[1]: Started docker.service. Oct 29 01:27:47.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:47.315483 env[1618]: time="2025-10-29T01:27:47.315449223Z" level=info msg="API listen on /run/docker.sock" Oct 29 01:27:48.081468 env[1344]: time="2025-10-29T01:27:48.081274824Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 29 01:27:48.610866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384565307.mount: Deactivated successfully. 
Oct 29 01:27:49.660336 env[1344]: time="2025-10-29T01:27:49.660302752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:49.661234 env[1344]: time="2025-10-29T01:27:49.661221368Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:49.662333 env[1344]: time="2025-10-29T01:27:49.662316952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:49.663382 env[1344]: time="2025-10-29T01:27:49.663370299Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:49.663837 env[1344]: time="2025-10-29T01:27:49.663823277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 29 01:27:49.664257 env[1344]: time="2025-10-29T01:27:49.664214496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 29 01:27:50.640421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 29 01:27:50.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:27:50.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:50.640579 systemd[1]: Stopped kubelet.service. Oct 29 01:27:50.641693 systemd[1]: Starting kubelet.service... Oct 29 01:27:50.820363 systemd[1]: Started kubelet.service. Oct 29 01:27:50.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:27:50.863994 kubelet[1747]: E1029 01:27:50.863958 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 01:27:50.865057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 01:27:50.865212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 01:27:50.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 29 01:27:51.004787 env[1344]: time="2025-10-29T01:27:51.004452636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:51.017009 env[1344]: time="2025-10-29T01:27:51.016988070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:51.020125 env[1344]: time="2025-10-29T01:27:51.020111950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:51.034948 env[1344]: time="2025-10-29T01:27:51.034933382Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:51.035308 env[1344]: time="2025-10-29T01:27:51.035294045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 29 01:27:51.035704 env[1344]: time="2025-10-29T01:27:51.035691981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 29 01:27:52.186335 env[1344]: time="2025-10-29T01:27:52.186304030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:52.202489 env[1344]: time="2025-10-29T01:27:52.202454472Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:52.209640 env[1344]: time="2025-10-29T01:27:52.209611900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:52.218039 env[1344]: time="2025-10-29T01:27:52.218006603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:52.218722 env[1344]: time="2025-10-29T01:27:52.218697354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 29 01:27:52.219654 env[1344]: time="2025-10-29T01:27:52.219629550Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 29 01:27:53.235196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946920648.mount: Deactivated successfully. 
Oct 29 01:27:53.709120 env[1344]: time="2025-10-29T01:27:53.709083756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:53.722066 env[1344]: time="2025-10-29T01:27:53.722040846Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:53.729088 env[1344]: time="2025-10-29T01:27:53.729071666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:53.737134 env[1344]: time="2025-10-29T01:27:53.737119112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:53.737349 env[1344]: time="2025-10-29T01:27:53.737334203Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 29 01:27:53.737735 env[1344]: time="2025-10-29T01:27:53.737695081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 29 01:27:54.262889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825237133.mount: Deactivated successfully. 
Oct 29 01:27:55.063016 env[1344]: time="2025-10-29T01:27:55.062960959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.069470 env[1344]: time="2025-10-29T01:27:55.069442239Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.070970 env[1344]: time="2025-10-29T01:27:55.070944726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.072385 env[1344]: time="2025-10-29T01:27:55.072361949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.073146 env[1344]: time="2025-10-29T01:27:55.073122064Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 29 01:27:55.073684 env[1344]: time="2025-10-29T01:27:55.073666313Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 01:27:55.537219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567358476.mount: Deactivated successfully. 
Oct 29 01:27:55.590529 env[1344]: time="2025-10-29T01:27:55.590499869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.592015 env[1344]: time="2025-10-29T01:27:55.591998459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.597056 env[1344]: time="2025-10-29T01:27:55.597040066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.610418 env[1344]: time="2025-10-29T01:27:55.610400703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:55.610902 env[1344]: time="2025-10-29T01:27:55.610884393Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 29 01:27:55.611709 env[1344]: time="2025-10-29T01:27:55.611693599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 29 01:27:56.112056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080036241.mount: Deactivated successfully. 
Oct 29 01:27:58.107768 env[1344]: time="2025-10-29T01:27:58.107729028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:58.108771 env[1344]: time="2025-10-29T01:27:58.108755184Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:58.109764 env[1344]: time="2025-10-29T01:27:58.109749828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:58.110919 env[1344]: time="2025-10-29T01:27:58.110905845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:27:58.111422 env[1344]: time="2025-10-29T01:27:58.111404327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 29 01:28:00.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:00.252103 systemd[1]: Stopped kubelet.service. Oct 29 01:28:00.253533 systemd[1]: Starting kubelet.service... Oct 29 01:28:00.256316 kernel: kauditd_printk_skb: 88 callbacks suppressed Oct 29 01:28:00.256353 kernel: audit: type=1130 audit(1761701280.252:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:28:00.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:00.260195 kernel: audit: type=1131 audit(1761701280.252:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:00.278357 systemd[1]: Reloading. Oct 29 01:28:00.330023 /usr/lib/systemd/system-generators/torcx-generator[1799]: time="2025-10-29T01:28:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 01:28:00.330255 /usr/lib/systemd/system-generators/torcx-generator[1799]: time="2025-10-29T01:28:00Z" level=info msg="torcx already run" Oct 29 01:28:00.391822 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 01:28:00.391925 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 01:28:00.403618 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 01:28:00.463975 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 01:28:00.464109 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 01:28:00.464375 systemd[1]: Stopped kubelet.service. 
Oct 29 01:28:00.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 01:28:00.467225 kernel: audit: type=1130 audit(1761701280.464:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 01:28:00.467659 systemd[1]: Starting kubelet.service... Oct 29 01:28:00.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:00.929865 systemd[1]: Started kubelet.service. Oct 29 01:28:00.933195 kernel: audit: type=1130 audit(1761701280.929:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:01.144664 kubelet[1874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 01:28:01.144936 kubelet[1874]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 01:28:01.144995 kubelet[1874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 01:28:01.145116 kubelet[1874]: I1029 01:28:01.145092 1874 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 01:28:01.492922 kubelet[1874]: I1029 01:28:01.492904 1874 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 01:28:01.493151 kubelet[1874]: I1029 01:28:01.493144 1874 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 01:28:01.493502 kubelet[1874]: I1029 01:28:01.493494 1874 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 01:28:01.517173 kubelet[1874]: E1029 01:28:01.517155 1874 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:01.517848 kubelet[1874]: I1029 01:28:01.517834 1874 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 01:28:01.524943 kubelet[1874]: E1029 01:28:01.524895 1874 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 01:28:01.524943 kubelet[1874]: I1029 01:28:01.524942 1874 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 01:28:01.527932 kubelet[1874]: I1029 01:28:01.527920 1874 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 01:28:01.528143 kubelet[1874]: I1029 01:28:01.528126 1874 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 01:28:01.528248 kubelet[1874]: I1029 01:28:01.528142 1874 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 29 01:28:01.528326 kubelet[1874]: I1029 01:28:01.528252 1874 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 29 01:28:01.528326 kubelet[1874]: I1029 01:28:01.528267 1874 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 01:28:01.528326 kubelet[1874]: I1029 01:28:01.528323 1874 state_mem.go:36] "Initialized new in-memory state store" Oct 29 01:28:01.530919 kubelet[1874]: I1029 01:28:01.530907 1874 kubelet.go:446] "Attempting to sync node with API server" Oct 29 01:28:01.530951 kubelet[1874]: I1029 01:28:01.530925 1874 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 01:28:01.530951 kubelet[1874]: I1029 01:28:01.530934 1874 kubelet.go:352] "Adding apiserver pod source" Oct 29 01:28:01.530951 kubelet[1874]: I1029 01:28:01.530939 1874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 01:28:01.546879 kubelet[1874]: W1029 01:28:01.546857 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:01.546981 kubelet[1874]: E1029 01:28:01.546969 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:01.547246 kubelet[1874]: W1029 01:28:01.547228 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:01.547311 kubelet[1874]: E1029 01:28:01.547300 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:01.549274 kubelet[1874]: I1029 01:28:01.549264 1874 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 01:28:01.549566 kubelet[1874]: I1029 01:28:01.549558 1874 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 01:28:01.550147 kubelet[1874]: W1029 01:28:01.550138 1874 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 01:28:01.551647 kubelet[1874]: I1029 01:28:01.551638 1874 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 01:28:01.551713 kubelet[1874]: I1029 01:28:01.551706 1874 server.go:1287] "Started kubelet" Oct 29 01:28:01.566869 kernel: audit: type=1400 audit(1761701281.558:197): avc: denied { mac_admin } for pid=1874 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:01.566912 kernel: audit: type=1401 audit(1761701281.558:197): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:01.566929 kernel: audit: type=1300 audit(1761701281.558:197): arch=c000003e syscall=188 success=no exit=-22 a0=c000b195f0 a1=c000b03938 a2=c000b195c0 a3=25 items=0 ppid=1 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.558000 audit[1874]: AVC avc: denied { mac_admin } for pid=1874 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 
01:28:01.558000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:01.558000 audit[1874]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b195f0 a1=c000b03938 a2=c000b195c0 a3=25 items=0 ppid=1 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.567068 kubelet[1874]: I1029 01:28:01.561787 1874 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 29 01:28:01.567068 kubelet[1874]: I1029 01:28:01.561814 1874 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 29 01:28:01.567068 kubelet[1874]: I1029 01:28:01.561863 1874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 01:28:01.567068 kubelet[1874]: I1029 01:28:01.566297 1874 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 01:28:01.567068 kubelet[1874]: I1029 01:28:01.567003 1874 server.go:479] "Adding debug handlers to kubelet server" Oct 29 01:28:01.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:01.570433 kernel: audit: type=1327 audit(1761701281.558:197): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:01.570464 kernel: audit: type=1400 audit(1761701281.561:198): avc: denied { mac_admin } for pid=1874 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:01.561000 audit[1874]: AVC avc: denied { mac_admin } for pid=1874 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:01.570776 kubelet[1874]: I1029 01:28:01.570751 1874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 01:28:01.570957 kubelet[1874]: I1029 01:28:01.570950 1874 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 01:28:01.572817 kernel: audit: type=1401 audit(1761701281.561:198): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:01.561000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:01.561000 audit[1874]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b179e0 a1=c000b03950 a2=c000b19680 a3=25 items=0 ppid=1 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 
01:28:01.566000 audit[1887]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.566000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff1f6de3c0 a2=0 a3=7fff1f6de3ac items=0 ppid=1874 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 29 01:28:01.566000 audit[1888]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.566000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff15076960 a2=0 a3=7fff1507694c items=0 ppid=1874 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 29 01:28:01.570000 audit[1890]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.570000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeaae96eb0 a2=0 a3=7ffeaae96e9c items=0 ppid=1874 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.570000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 01:28:01.575241 kubelet[1874]: E1029 01:28:01.571570 1874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872d1fb4f3d32f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 01:28:01.551692536 +0000 UTC m=+0.618267148,LastTimestamp:2025-10-29 01:28:01.551692536 +0000 UTC m=+0.618267148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 01:28:01.575712 kubelet[1874]: I1029 01:28:01.575557 1874 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 01:28:01.575712 kubelet[1874]: I1029 01:28:01.575603 1874 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 01:28:01.575835 kubelet[1874]: E1029 01:28:01.575822 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:01.577080 kubelet[1874]: I1029 01:28:01.577070 1874 factory.go:221] Registration of the systemd container factory successfully Oct 29 01:28:01.577210 kubelet[1874]: I1029 01:28:01.577200 1874 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 01:28:01.577273 kubelet[1874]: I1029 01:28:01.577259 1874 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 01:28:01.577301 kubelet[1874]: I1029 01:28:01.577283 1874 reconciler.go:26] "Reconciler: start to sync state" Oct 29 01:28:01.577781 kubelet[1874]: W1029 01:28:01.577540 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:01.577781 kubelet[1874]: E1029 01:28:01.577567 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:01.577781 kubelet[1874]: E1029 01:28:01.577598 1874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="200ms" Oct 29 01:28:01.578032 kubelet[1874]: E1029 01:28:01.578013 1874 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 01:28:01.578744 kubelet[1874]: I1029 01:28:01.578737 1874 factory.go:221] Registration of the containerd container factory successfully Oct 29 01:28:01.578000 audit[1892]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.578000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef0067360 a2=0 a3=7ffef006734c items=0 ppid=1874 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 01:28:01.585000 audit[1895]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.585000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff00b32750 a2=0 a3=7fff00b3273c items=0 ppid=1874 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 29 01:28:01.585880 kubelet[1874]: I1029 01:28:01.585868 1874 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 29 01:28:01.586000 audit[1896]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:01.586000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdaebdf340 a2=0 a3=7ffdaebdf32c items=0 ppid=1874 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 29 01:28:01.586580 kubelet[1874]: I1029 01:28:01.586573 1874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 01:28:01.586630 kubelet[1874]: I1029 01:28:01.586620 1874 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 01:28:01.586684 kubelet[1874]: I1029 01:28:01.586677 1874 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 01:28:01.586726 kubelet[1874]: I1029 01:28:01.586719 1874 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 01:28:01.586793 kubelet[1874]: E1029 01:28:01.586781 1874 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 01:28:01.587000 audit[1897]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.587000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5a5a83e0 a2=0 a3=7ffe5a5a83cc items=0 ppid=1874 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 29 01:28:01.588000 audit[1898]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.588000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd13ed70e0 a2=0 a3=7ffd13ed70cc items=0 ppid=1874 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 29 01:28:01.588000 audit[1900]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:01.588000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4373c2e0 a2=0 
a3=7ffe4373c2cc items=0 ppid=1874 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 29 01:28:01.589000 audit[1901]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:01.589000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0700bcc0 a2=0 a3=7ffe0700bcac items=0 ppid=1874 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 29 01:28:01.590000 audit[1902]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:01.590000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe2fedbb60 a2=0 a3=7ffe2fedbb4c items=0 ppid=1874 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 29 01:28:01.590000 audit[1903]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:01.590000 audit[1903]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=136 a0=3 a1=7ffe08acd9b0 a2=0 a3=7ffe08acd99c items=0 ppid=1874 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 29 01:28:01.591545 kubelet[1874]: W1029 01:28:01.591520 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:01.591608 kubelet[1874]: E1029 01:28:01.591598 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:01.605330 kubelet[1874]: I1029 01:28:01.605316 1874 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 01:28:01.605330 kubelet[1874]: I1029 01:28:01.605325 1874 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 01:28:01.605408 kubelet[1874]: I1029 01:28:01.605335 1874 state_mem.go:36] "Initialized new in-memory state store" Oct 29 01:28:01.606355 kubelet[1874]: I1029 01:28:01.606340 1874 policy_none.go:49] "None policy: Start" Oct 29 01:28:01.606399 kubelet[1874]: I1029 01:28:01.606357 1874 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 01:28:01.606399 kubelet[1874]: I1029 01:28:01.606366 1874 state_mem.go:35] "Initializing new in-memory state store" Oct 29 01:28:01.609546 kubelet[1874]: I1029 01:28:01.609531 1874 manager.go:519] "Failed to read data 
from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 01:28:01.609000 audit[1874]: AVC avc: denied { mac_admin } for pid=1874 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:01.609000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:01.609000 audit[1874]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f1c990 a1=c000e31c50 a2=c000f1c960 a3=25 items=0 ppid=1 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:01.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:01.609717 kubelet[1874]: I1029 01:28:01.609566 1874 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 29 01:28:01.609717 kubelet[1874]: I1029 01:28:01.609621 1874 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 01:28:01.609717 kubelet[1874]: I1029 01:28:01.609629 1874 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 01:28:01.610529 kubelet[1874]: I1029 01:28:01.610519 1874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 01:28:01.615048 kubelet[1874]: E1029 01:28:01.615035 1874 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 01:28:01.615126 kubelet[1874]: E1029 01:28:01.615061 1874 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 01:28:01.693698 kubelet[1874]: E1029 01:28:01.693670 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:01.693899 kubelet[1874]: E1029 01:28:01.693882 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:01.696950 kubelet[1874]: E1029 01:28:01.696934 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:01.715063 kubelet[1874]: I1029 01:28:01.715050 1874 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 01:28:01.715384 kubelet[1874]: E1029 01:28:01.715368 1874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 29 01:28:01.778151 kubelet[1874]: I1029 01:28:01.778091 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:01.778280 kubelet[1874]: I1029 01:28:01.778265 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" 
(UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:01.778356 kubelet[1874]: I1029 01:28:01.778344 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 29 01:28:01.778445 kubelet[1874]: I1029 01:28:01.778429 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a5b3dd70dd2bc9be0e44023a75a46c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5b3dd70dd2bc9be0e44023a75a46c6\") " pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:01.778524 kubelet[1874]: I1029 01:28:01.778511 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:01.778596 kubelet[1874]: I1029 01:28:01.778585 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:01.778698 kubelet[1874]: I1029 01:28:01.778684 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:01.778767 kubelet[1874]: I1029 01:28:01.778756 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a5b3dd70dd2bc9be0e44023a75a46c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5b3dd70dd2bc9be0e44023a75a46c6\") " pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:01.778838 kubelet[1874]: I1029 01:28:01.778826 1874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a5b3dd70dd2bc9be0e44023a75a46c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a5b3dd70dd2bc9be0e44023a75a46c6\") " pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:01.778997 kubelet[1874]: E1029 01:28:01.778980 1874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="400ms" Oct 29 01:28:01.827384 update_engine[1332]: I1029 01:28:01.827347 1332 update_attempter.cc:509] Updating boot flags... 
Oct 29 01:28:01.917201 kubelet[1874]: I1029 01:28:01.916992 1874 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 01:28:01.917384 kubelet[1874]: E1029 01:28:01.917359 1874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 29 01:28:01.995123 env[1344]: time="2025-10-29T01:28:01.994993978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a5b3dd70dd2bc9be0e44023a75a46c6,Namespace:kube-system,Attempt:0,}" Oct 29 01:28:01.995123 env[1344]: time="2025-10-29T01:28:01.995042399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 29 01:28:01.998076 env[1344]: time="2025-10-29T01:28:01.997893437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 29 01:28:02.179418 kubelet[1874]: E1029 01:28:02.179390 1874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="800ms" Oct 29 01:28:02.318927 kubelet[1874]: I1029 01:28:02.318720 1874 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 01:28:02.318927 kubelet[1874]: E1029 01:28:02.318897 1874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 29 01:28:02.399063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781053366.mount: Deactivated successfully. 
Oct 29 01:28:02.401197 env[1344]: time="2025-10-29T01:28:02.401172148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.401793 env[1344]: time="2025-10-29T01:28:02.401781984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.402843 env[1344]: time="2025-10-29T01:28:02.402831025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.403303 env[1344]: time="2025-10-29T01:28:02.403291803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.404744 env[1344]: time="2025-10-29T01:28:02.404728620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.406560 env[1344]: time="2025-10-29T01:28:02.406539305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.407201 env[1344]: time="2025-10-29T01:28:02.407177477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.408372 env[1344]: time="2025-10-29T01:28:02.408356833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.410781 env[1344]: time="2025-10-29T01:28:02.410765630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.411138 env[1344]: time="2025-10-29T01:28:02.411124228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.411495 env[1344]: time="2025-10-29T01:28:02.411480897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.413916 env[1344]: time="2025-10-29T01:28:02.413899288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:02.421673 env[1344]: time="2025-10-29T01:28:02.418053092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:02.421673 env[1344]: time="2025-10-29T01:28:02.418253124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:02.421673 env[1344]: time="2025-10-29T01:28:02.418260840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:02.421673 env[1344]: time="2025-10-29T01:28:02.418400129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa2e124451a4c9bf721e5f31183656d0ab6ffaa3c6c4cdb7d2a2aeaa17d843f2 pid=1927 runtime=io.containerd.runc.v2 Oct 29 01:28:02.428029 env[1344]: time="2025-10-29T01:28:02.427991865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:02.428269 env[1344]: time="2025-10-29T01:28:02.428255500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:02.428352 env[1344]: time="2025-10-29T01:28:02.428318103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:02.432101 env[1344]: time="2025-10-29T01:28:02.431081029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:02.432101 env[1344]: time="2025-10-29T01:28:02.431098752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:02.432101 env[1344]: time="2025-10-29T01:28:02.431105337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:02.432101 env[1344]: time="2025-10-29T01:28:02.431165857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ef719d750329ae546918c5b286d4c4d5a0c1d235e8a21134bc42b8f51dd6946 pid=1961 runtime=io.containerd.runc.v2 Oct 29 01:28:02.432101 env[1344]: time="2025-10-29T01:28:02.428437816Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/369d484a129b7b02005d412fe515ca9b241edc815d1b494fa3bc201e399bb3f1 pid=1951 runtime=io.containerd.runc.v2 Oct 29 01:28:02.482143 env[1344]: time="2025-10-29T01:28:02.482120100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa2e124451a4c9bf721e5f31183656d0ab6ffaa3c6c4cdb7d2a2aeaa17d843f2\"" Oct 29 01:28:02.485435 env[1344]: time="2025-10-29T01:28:02.485417658Z" level=info msg="CreateContainer within sandbox \"aa2e124451a4c9bf721e5f31183656d0ab6ffaa3c6c4cdb7d2a2aeaa17d843f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 01:28:02.490679 env[1344]: time="2025-10-29T01:28:02.490650180Z" level=info msg="CreateContainer within sandbox \"aa2e124451a4c9bf721e5f31183656d0ab6ffaa3c6c4cdb7d2a2aeaa17d843f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7fdc3698d4293d1fe6a3343d3ff3e1d2acfec374778eec8b946b89227cd3127\"" Oct 29 01:28:02.491685 env[1344]: time="2025-10-29T01:28:02.491672408Z" level=info msg="StartContainer for \"e7fdc3698d4293d1fe6a3343d3ff3e1d2acfec374778eec8b946b89227cd3127\"" Oct 29 01:28:02.500786 env[1344]: time="2025-10-29T01:28:02.500759747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"369d484a129b7b02005d412fe515ca9b241edc815d1b494fa3bc201e399bb3f1\"" Oct 29 01:28:02.502933 env[1344]: time="2025-10-29T01:28:02.502911527Z" level=info msg="CreateContainer within sandbox \"369d484a129b7b02005d412fe515ca9b241edc815d1b494fa3bc201e399bb3f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 01:28:02.521836 env[1344]: time="2025-10-29T01:28:02.521804068Z" level=info msg="CreateContainer within sandbox \"369d484a129b7b02005d412fe515ca9b241edc815d1b494fa3bc201e399bb3f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6046ba8f8ca121d9258d9321163e9d80441217f24dd2fd29d1012fa7ff01c642\"" Oct 29 01:28:02.522427 env[1344]: time="2025-10-29T01:28:02.522402746Z" level=info msg="StartContainer for \"6046ba8f8ca121d9258d9321163e9d80441217f24dd2fd29d1012fa7ff01c642\"" Oct 29 01:28:02.525339 env[1344]: time="2025-10-29T01:28:02.525323201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1a5b3dd70dd2bc9be0e44023a75a46c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef719d750329ae546918c5b286d4c4d5a0c1d235e8a21134bc42b8f51dd6946\"" Oct 29 01:28:02.527594 env[1344]: time="2025-10-29T01:28:02.527579544Z" level=info msg="CreateContainer within sandbox \"4ef719d750329ae546918c5b286d4c4d5a0c1d235e8a21134bc42b8f51dd6946\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 01:28:02.534713 env[1344]: time="2025-10-29T01:28:02.534688507Z" level=info msg="CreateContainer within sandbox \"4ef719d750329ae546918c5b286d4c4d5a0c1d235e8a21134bc42b8f51dd6946\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56f30d021dd739a1f380951feb7b7e2ad4101f6fb519b8dd612bfd2a5df2d37b\"" Oct 29 01:28:02.535116 env[1344]: time="2025-10-29T01:28:02.535103668Z" level=info msg="StartContainer for \"56f30d021dd739a1f380951feb7b7e2ad4101f6fb519b8dd612bfd2a5df2d37b\"" Oct 29 01:28:02.543554 kubelet[1874]: W1029 01:28:02.540571 1874 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:02.543554 kubelet[1874]: E1029 01:28:02.540615 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:02.574412 env[1344]: time="2025-10-29T01:28:02.574384887Z" level=info msg="StartContainer for \"e7fdc3698d4293d1fe6a3343d3ff3e1d2acfec374778eec8b946b89227cd3127\" returns successfully" Oct 29 01:28:02.585429 env[1344]: time="2025-10-29T01:28:02.585400519Z" level=info msg="StartContainer for \"6046ba8f8ca121d9258d9321163e9d80441217f24dd2fd29d1012fa7ff01c642\" returns successfully" Oct 29 01:28:02.593908 kubelet[1874]: E1029 01:28:02.593891 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:02.596386 kubelet[1874]: E1029 01:28:02.596371 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:02.604714 env[1344]: time="2025-10-29T01:28:02.604684316Z" level=info msg="StartContainer for \"56f30d021dd739a1f380951feb7b7e2ad4101f6fb519b8dd612bfd2a5df2d37b\" returns successfully" Oct 29 01:28:02.703008 kubelet[1874]: W1029 01:28:02.702916 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:02.703008 
kubelet[1874]: E1029 01:28:02.702969 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:02.980454 kubelet[1874]: E1029 01:28:02.980386 1874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="1.6s" Oct 29 01:28:02.980564 kubelet[1874]: W1029 01:28:02.980537 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:02.980603 kubelet[1874]: E1029 01:28:02.980572 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:03.120332 kubelet[1874]: I1029 01:28:03.120313 1874 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 01:28:03.120522 kubelet[1874]: E1029 01:28:03.120506 1874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 29 01:28:03.155923 kubelet[1874]: W1029 01:28:03.155888 1874 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused Oct 29 01:28:03.156013 kubelet[1874]: E1029 01:28:03.155928 1874 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" Oct 29 01:28:03.598457 kubelet[1874]: E1029 01:28:03.598438 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:03.598723 kubelet[1874]: E1029 01:28:03.598703 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:04.363227 kubelet[1874]: E1029 01:28:04.363203 1874 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 29 01:28:04.584279 kubelet[1874]: E1029 01:28:04.584249 1874 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 01:28:04.600191 kubelet[1874]: E1029 01:28:04.600159 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:04.694635 kubelet[1874]: E1029 01:28:04.694605 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:04.711134 kubelet[1874]: E1029 01:28:04.711113 1874 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode 
annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 29 01:28:04.721927 kubelet[1874]: I1029 01:28:04.721795 1874 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 01:28:04.730572 kubelet[1874]: I1029 01:28:04.730557 1874 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 01:28:04.730670 kubelet[1874]: E1029 01:28:04.730658 1874 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 01:28:04.736908 kubelet[1874]: E1029 01:28:04.736893 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:04.837415 kubelet[1874]: E1029 01:28:04.837382 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:04.938069 kubelet[1874]: E1029 01:28:04.938032 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.038822 kubelet[1874]: E1029 01:28:05.038750 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.139676 kubelet[1874]: E1029 01:28:05.139652 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.240539 kubelet[1874]: E1029 01:28:05.240477 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.341360 kubelet[1874]: E1029 01:28:05.341291 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.441734 kubelet[1874]: E1029 01:28:05.441705 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.542546 kubelet[1874]: E1029 
01:28:05.542516 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.601230 kubelet[1874]: E1029 01:28:05.601144 1874 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 01:28:05.643070 kubelet[1874]: E1029 01:28:05.643044 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.743739 kubelet[1874]: E1029 01:28:05.743716 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.844476 kubelet[1874]: E1029 01:28:05.844457 1874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:05.977084 kubelet[1874]: I1029 01:28:05.977064 1874 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:05.983668 kubelet[1874]: I1029 01:28:05.983637 1874 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:05.986408 kubelet[1874]: I1029 01:28:05.986394 1874 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 01:28:06.093107 systemd[1]: Reloading. 
Oct 29 01:28:06.145378 /usr/lib/systemd/system-generators/torcx-generator[2185]: time="2025-10-29T01:28:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 01:28:06.145636 /usr/lib/systemd/system-generators/torcx-generator[2185]: time="2025-10-29T01:28:06Z" level=info msg="torcx already run" Oct 29 01:28:06.202691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 01:28:06.202799 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 01:28:06.215387 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 01:28:06.270921 systemd[1]: Stopping kubelet.service... Oct 29 01:28:06.286409 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 01:28:06.286615 systemd[1]: Stopped kubelet.service. Oct 29 01:28:06.288674 kernel: kauditd_printk_skb: 42 callbacks suppressed Oct 29 01:28:06.288733 kernel: audit: type=1131 audit(1761701286.286:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:06.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:06.288462 systemd[1]: Starting kubelet.service... 
Oct 29 01:28:07.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:07.330974 systemd[1]: Started kubelet.service. Oct 29 01:28:07.335233 kernel: audit: type=1130 audit(1761701287.330:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:07.387084 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 01:28:07.387317 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 01:28:07.387357 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 01:28:07.387454 kubelet[2259]: I1029 01:28:07.387433 2259 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 01:28:07.392337 kubelet[2259]: I1029 01:28:07.392325 2259 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 01:28:07.392399 kubelet[2259]: I1029 01:28:07.392390 2259 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 01:28:07.392586 kubelet[2259]: I1029 01:28:07.392577 2259 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 01:28:07.393283 kubelet[2259]: I1029 01:28:07.393274 2259 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 29 01:28:07.443595 kubelet[2259]: I1029 01:28:07.443580 2259 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 01:28:07.452176 kubelet[2259]: E1029 01:28:07.452158 2259 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 01:28:07.453802 kubelet[2259]: I1029 01:28:07.453789 2259 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 01:28:07.455905 kubelet[2259]: I1029 01:28:07.455893 2259 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 01:28:07.456162 kubelet[2259]: I1029 01:28:07.456146 2259 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 01:28:07.456270 kubelet[2259]: I1029 01:28:07.456161 2259 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 29 01:28:07.456334 kubelet[2259]: I1029 01:28:07.456276 2259 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 29 01:28:07.456334 kubelet[2259]: I1029 01:28:07.456283 2259 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 01:28:07.456334 kubelet[2259]: I1029 01:28:07.456308 2259 state_mem.go:36] "Initialized new in-memory state store" Oct 29 01:28:07.462092 kubelet[2259]: I1029 01:28:07.462079 2259 kubelet.go:446] "Attempting to sync node with API server" Oct 29 01:28:07.462133 kubelet[2259]: I1029 01:28:07.462097 2259 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 01:28:07.462133 kubelet[2259]: I1029 01:28:07.462111 2259 kubelet.go:352] "Adding apiserver pod source" Oct 29 01:28:07.462133 kubelet[2259]: I1029 01:28:07.462119 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 01:28:07.463595 kubelet[2259]: I1029 01:28:07.463581 2259 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 01:28:07.463815 kubelet[2259]: I1029 01:28:07.463803 2259 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 01:28:07.464056 kubelet[2259]: I1029 01:28:07.464045 2259 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 01:28:07.464086 kubelet[2259]: I1029 01:28:07.464062 2259 server.go:1287] "Started kubelet" Oct 29 01:28:07.467000 audit[2259]: AVC avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:07.468238 kubelet[2259]: I1029 01:28:07.468221 2259 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 29 01:28:07.468309 kubelet[2259]: I1029 01:28:07.468299 2259 kubelet.go:1511] "Unprivileged 
containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 29 01:28:07.468370 kubelet[2259]: I1029 01:28:07.468358 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 01:28:07.467000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:07.471533 kernel: audit: type=1400 audit(1761701287.467:214): avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:07.472107 kernel: audit: type=1401 audit(1761701287.467:214): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:07.472145 kernel: audit: type=1300 audit(1761701287.467:214): arch=c000003e syscall=188 success=no exit=-22 a0=c0006b44b0 a1=c00095b968 a2=c0006b4480 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:07.467000 audit[2259]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006b44b0 a1=c00095b968 a2=c0006b4480 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:07.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:07.479432 kernel: audit: type=1327 audit(1761701287.467:214): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:07.468000 audit[2259]: AVC avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:07.481949 kubelet[2259]: I1029 01:28:07.481922 2259 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 01:28:07.482673 kernel: audit: type=1400 audit(1761701287.468:215): avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:07.482767 kubelet[2259]: I1029 01:28:07.482758 2259 server.go:479] "Adding debug handlers to kubelet server" Oct 29 01:28:07.468000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:07.483386 kubelet[2259]: I1029 01:28:07.483361 2259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 01:28:07.483526 kubelet[2259]: I1029 01:28:07.483518 2259 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 01:28:07.483681 kubelet[2259]: I1029 01:28:07.483671 2259 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 01:28:07.468000 audit[2259]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000730720 a1=c00095b980 a2=c0006b4540 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 29 01:28:07.485439 kubelet[2259]: I1029 01:28:07.485431 2259 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 01:28:07.485577 kubelet[2259]: E1029 01:28:07.485567 2259 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 01:28:07.486779 kubelet[2259]: I1029 01:28:07.486768 2259 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 01:28:07.486892 kubelet[2259]: I1029 01:28:07.486885 2259 reconciler.go:26] "Reconciler: start to sync state" Oct 29 01:28:07.488004 kubelet[2259]: I1029 01:28:07.487975 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 01:28:07.488483 kernel: audit: type=1401 audit(1761701287.468:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:07.488566 kernel: audit: type=1300 audit(1761701287.468:215): arch=c000003e syscall=188 success=no exit=-22 a0=c000730720 a1=c00095b980 a2=c0006b4540 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:07.488804 kubelet[2259]: I1029 01:28:07.488795 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 01:28:07.488874 kubelet[2259]: I1029 01:28:07.488866 2259 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 01:28:07.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:07.488949 kubelet[2259]: I1029 01:28:07.488941 2259 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 01:28:07.488996 kubelet[2259]: I1029 01:28:07.488989 2259 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 01:28:07.489071 kubelet[2259]: E1029 01:28:07.489060 2259 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 01:28:07.492071 kernel: audit: type=1327 audit(1761701287.468:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:07.514842 kubelet[2259]: I1029 01:28:07.512429 2259 factory.go:221] Registration of the systemd container factory successfully Oct 29 01:28:07.514842 kubelet[2259]: I1029 01:28:07.512493 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 01:28:07.514842 kubelet[2259]: I1029 01:28:07.513683 2259 factory.go:221] Registration of the containerd container factory successfully Oct 29 01:28:07.537361 kubelet[2259]: E1029 01:28:07.537340 2259 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 01:28:07.564234 kubelet[2259]: I1029 01:28:07.564169 2259 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 01:28:07.564234 kubelet[2259]: I1029 01:28:07.564216 2259 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 01:28:07.564234 kubelet[2259]: I1029 01:28:07.564231 2259 state_mem.go:36] "Initialized new in-memory state store" Oct 29 01:28:07.564364 kubelet[2259]: I1029 01:28:07.564343 2259 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 01:28:07.564364 kubelet[2259]: I1029 01:28:07.564351 2259 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 01:28:07.564364 kubelet[2259]: I1029 01:28:07.564361 2259 policy_none.go:49] "None policy: Start" Oct 29 01:28:07.564419 kubelet[2259]: I1029 01:28:07.564367 2259 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 01:28:07.564419 kubelet[2259]: I1029 01:28:07.564372 2259 state_mem.go:35] "Initializing new in-memory state store" Oct 29 01:28:07.564460 kubelet[2259]: I1029 01:28:07.564434 2259 state_mem.go:75] "Updated machine memory state" Oct 29 01:28:07.565084 kubelet[2259]: I1029 01:28:07.565073 2259 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 01:28:07.564000 audit[2259]: AVC avc: denied { mac_admin } for pid=2259 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:07.564000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 01:28:07.564000 audit[2259]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00129bec0 a1=c000da2a20 a2=c00129be90 a3=25 items=0 ppid=1 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:07.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 01:28:07.565884 kubelet[2259]: I1029 01:28:07.565807 2259 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 29 01:28:07.565909 kubelet[2259]: I1029 01:28:07.565890 2259 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 01:28:07.565909 kubelet[2259]: I1029 01:28:07.565897 2259 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 01:28:07.566072 kubelet[2259]: I1029 01:28:07.566061 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 01:28:07.567907 kubelet[2259]: E1029 01:28:07.567642 2259 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 01:28:07.590167 kubelet[2259]: I1029 01:28:07.590112 2259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 01:28:07.593211 kubelet[2259]: E1029 01:28:07.592965 2259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 01:28:07.596406 kubelet[2259]: I1029 01:28:07.596396 2259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:07.596993 kubelet[2259]: I1029 01:28:07.596983 2259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:07.599130 kubelet[2259]: E1029 01:28:07.599118 2259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:07.599252 kubelet[2259]: E1029 01:28:07.599243 2259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:07.667842 kubelet[2259]: I1029 01:28:07.667822 2259 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 01:28:07.674388 kubelet[2259]: I1029 01:28:07.673965 2259 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 01:28:07.674388 kubelet[2259]: I1029 01:28:07.674006 2259 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 01:28:07.688349 kubelet[2259]: I1029 01:28:07.688328 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a5b3dd70dd2bc9be0e44023a75a46c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5b3dd70dd2bc9be0e44023a75a46c6\") " pod="kube-system/kube-apiserver-localhost" 
Oct 29 01:28:07.688349 kubelet[2259]: I1029 01:28:07.688348 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a5b3dd70dd2bc9be0e44023a75a46c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1a5b3dd70dd2bc9be0e44023a75a46c6\") " pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:07.688492 kubelet[2259]: I1029 01:28:07.688371 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a5b3dd70dd2bc9be0e44023a75a46c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1a5b3dd70dd2bc9be0e44023a75a46c6\") " pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:07.688492 kubelet[2259]: I1029 01:28:07.688383 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:07.688492 kubelet[2259]: I1029 01:28:07.688392 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:07.688492 kubelet[2259]: I1029 01:28:07.688406 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 29 
01:28:07.688492 kubelet[2259]: I1029 01:28:07.688421 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:07.688666 kubelet[2259]: I1029 01:28:07.688434 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:07.688666 kubelet[2259]: I1029 01:28:07.688445 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 01:28:08.462597 kubelet[2259]: I1029 01:28:08.462574 2259 apiserver.go:52] "Watching apiserver" Oct 29 01:28:08.487870 kubelet[2259]: I1029 01:28:08.487843 2259 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 01:28:08.548632 kubelet[2259]: I1029 01:28:08.548605 2259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 01:28:08.548807 kubelet[2259]: I1029 01:28:08.548793 2259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:08.555513 kubelet[2259]: E1029 01:28:08.555499 2259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" 
pod="kube-system/kube-scheduler-localhost" Oct 29 01:28:08.555730 kubelet[2259]: E1029 01:28:08.555718 2259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 01:28:08.567839 kubelet[2259]: I1029 01:28:08.567811 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.56779028 podStartE2EDuration="3.56779028s" podCreationTimestamp="2025-10-29 01:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 01:28:08.564086096 +0000 UTC m=+1.220201126" watchObservedRunningTime="2025-10-29 01:28:08.56779028 +0000 UTC m=+1.223905308" Oct 29 01:28:08.571392 kubelet[2259]: I1029 01:28:08.571371 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.571363075 podStartE2EDuration="3.571363075s" podCreationTimestamp="2025-10-29 01:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 01:28:08.568110509 +0000 UTC m=+1.224225541" watchObservedRunningTime="2025-10-29 01:28:08.571363075 +0000 UTC m=+1.227478102" Oct 29 01:28:08.575707 kubelet[2259]: I1029 01:28:08.575675 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.575665679 podStartE2EDuration="3.575665679s" podCreationTimestamp="2025-10-29 01:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 01:28:08.571667584 +0000 UTC m=+1.227782611" watchObservedRunningTime="2025-10-29 01:28:08.575665679 +0000 UTC m=+1.231780706" Oct 29 01:28:11.241341 kubelet[2259]: I1029 01:28:11.241317 
2259 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 01:28:11.241658 env[1344]: time="2025-10-29T01:28:11.241572139Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 01:28:11.241857 kubelet[2259]: I1029 01:28:11.241707 2259 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 01:28:12.117926 kubelet[2259]: I1029 01:28:12.117892 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0d7227b-8a9b-4356-b4de-e03e31b731b6-kube-proxy\") pod \"kube-proxy-27vgs\" (UID: \"c0d7227b-8a9b-4356-b4de-e03e31b731b6\") " pod="kube-system/kube-proxy-27vgs" Oct 29 01:28:12.118042 kubelet[2259]: I1029 01:28:12.117934 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd9hb\" (UniqueName: \"kubernetes.io/projected/c0d7227b-8a9b-4356-b4de-e03e31b731b6-kube-api-access-kd9hb\") pod \"kube-proxy-27vgs\" (UID: \"c0d7227b-8a9b-4356-b4de-e03e31b731b6\") " pod="kube-system/kube-proxy-27vgs" Oct 29 01:28:12.118042 kubelet[2259]: I1029 01:28:12.117951 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0d7227b-8a9b-4356-b4de-e03e31b731b6-xtables-lock\") pod \"kube-proxy-27vgs\" (UID: \"c0d7227b-8a9b-4356-b4de-e03e31b731b6\") " pod="kube-system/kube-proxy-27vgs" Oct 29 01:28:12.118042 kubelet[2259]: I1029 01:28:12.117978 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0d7227b-8a9b-4356-b4de-e03e31b731b6-lib-modules\") pod \"kube-proxy-27vgs\" (UID: \"c0d7227b-8a9b-4356-b4de-e03e31b731b6\") " pod="kube-system/kube-proxy-27vgs" Oct 29 01:28:12.226724 
kubelet[2259]: I1029 01:28:12.226694 2259 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 29 01:28:12.336428 env[1344]: time="2025-10-29T01:28:12.336345767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27vgs,Uid:c0d7227b-8a9b-4356-b4de-e03e31b731b6,Namespace:kube-system,Attempt:0,}" Oct 29 01:28:12.351616 env[1344]: time="2025-10-29T01:28:12.351560250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:12.351746 env[1344]: time="2025-10-29T01:28:12.351726801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:12.351855 env[1344]: time="2025-10-29T01:28:12.351830283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:12.352079 env[1344]: time="2025-10-29T01:28:12.352045089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da9c05a77e8c3efb6a3d3583a124ff7f9acf6682bccccbf206ea4da92303dd02 pid=2309 runtime=io.containerd.runc.v2 Oct 29 01:28:12.367636 systemd[1]: run-containerd-runc-k8s.io-da9c05a77e8c3efb6a3d3583a124ff7f9acf6682bccccbf206ea4da92303dd02-runc.6iqROY.mount: Deactivated successfully. 
Oct 29 01:28:12.408877 env[1344]: time="2025-10-29T01:28:12.408852676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27vgs,Uid:c0d7227b-8a9b-4356-b4de-e03e31b731b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"da9c05a77e8c3efb6a3d3583a124ff7f9acf6682bccccbf206ea4da92303dd02\"" Oct 29 01:28:12.410564 env[1344]: time="2025-10-29T01:28:12.410548713Z" level=info msg="CreateContainer within sandbox \"da9c05a77e8c3efb6a3d3583a124ff7f9acf6682bccccbf206ea4da92303dd02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 01:28:12.418649 env[1344]: time="2025-10-29T01:28:12.418625188Z" level=info msg="CreateContainer within sandbox \"da9c05a77e8c3efb6a3d3583a124ff7f9acf6682bccccbf206ea4da92303dd02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a8f9428e8855492883744a94f8c1244c0c0ab1f551f3615648373e3658e40f7\"" Oct 29 01:28:12.418975 env[1344]: time="2025-10-29T01:28:12.418957921Z" level=info msg="StartContainer for \"1a8f9428e8855492883744a94f8c1244c0c0ab1f551f3615648373e3658e40f7\"" Oct 29 01:28:12.419826 kubelet[2259]: I1029 01:28:12.419807 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r49p\" (UniqueName: \"kubernetes.io/projected/48a775e2-3575-4588-a47f-bdcc46244fe6-kube-api-access-7r49p\") pod \"tigera-operator-7dcd859c48-k2xg5\" (UID: \"48a775e2-3575-4588-a47f-bdcc46244fe6\") " pod="tigera-operator/tigera-operator-7dcd859c48-k2xg5" Oct 29 01:28:12.420030 kubelet[2259]: I1029 01:28:12.419830 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48a775e2-3575-4588-a47f-bdcc46244fe6-var-lib-calico\") pod \"tigera-operator-7dcd859c48-k2xg5\" (UID: \"48a775e2-3575-4588-a47f-bdcc46244fe6\") " pod="tigera-operator/tigera-operator-7dcd859c48-k2xg5" Oct 29 01:28:12.450293 env[1344]: time="2025-10-29T01:28:12.450264904Z" 
level=info msg="StartContainer for \"1a8f9428e8855492883744a94f8c1244c0c0ab1f551f3615648373e3658e40f7\" returns successfully" Oct 29 01:28:12.698920 env[1344]: time="2025-10-29T01:28:12.698885478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-k2xg5,Uid:48a775e2-3575-4588-a47f-bdcc46244fe6,Namespace:tigera-operator,Attempt:0,}" Oct 29 01:28:12.711376 env[1344]: time="2025-10-29T01:28:12.711333663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:12.711511 env[1344]: time="2025-10-29T01:28:12.711386437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:12.711511 env[1344]: time="2025-10-29T01:28:12.711407448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:12.711626 env[1344]: time="2025-10-29T01:28:12.711602082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2065b92e3a9330f197d324bb77948f132edf55e17351edd1dc8bc1e96bb47204 pid=2383 runtime=io.containerd.runc.v2 Oct 29 01:28:12.758016 env[1344]: time="2025-10-29T01:28:12.757988871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-k2xg5,Uid:48a775e2-3575-4588-a47f-bdcc46244fe6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2065b92e3a9330f197d324bb77948f132edf55e17351edd1dc8bc1e96bb47204\"" Oct 29 01:28:12.759085 env[1344]: time="2025-10-29T01:28:12.759071888Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 01:28:12.889200 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 29 01:28:12.889264 kernel: audit: type=1325 audit(1761701292.887:217): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
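The kernel audit records that follow log every iptables invocation kube-proxy makes, with the full command line hex-encoded in the `proctitle=` field (the process argv, NUL-separated). A short helper recovers the readable command; this is a sketch, decoding the proctitle value from the first audit event (`:217`) below:

```python
def decode_proctitle(hex_str):
    """Audit PROCTITLE records hex-encode the process argv with NUL
    separators; decode back to a space-joined command line."""
    raw = bytes.fromhex(hex_str)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# proctitle value from the audit(…:217) record below
cmd = decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)
print(cmd)  # iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

Decoded this way, the long run of audit events reads as kube-proxy creating its canary, KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, and KUBE-POSTROUTING chains in the mangle, nat, and filter tables for both IPv4 (`iptables`) and IPv6 (`ip6tables`).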
Oct 29 01:28:12.887000 audit[2452]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:12.887000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd05c1ada0 a2=0 a3=7ffd05c1ad8c items=0 ppid=2360 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 01:28:12.896847 kernel: audit: type=1300 audit(1761701292.887:217): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd05c1ada0 a2=0 a3=7ffd05c1ad8c items=0 ppid=2360 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.896895 kernel: audit: type=1327 audit(1761701292.887:217): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 01:28:12.891000 audit[2453]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:12.891000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbdea58d0 a2=0 a3=7fffbdea58bc items=0 ppid=2360 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.902874 kernel: audit: type=1325 audit(1761701292.891:218): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:12.902912 kernel: 
audit: type=1300 audit(1761701292.891:218): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbdea58d0 a2=0 a3=7fffbdea58bc items=0 ppid=2360 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.902930 kernel: audit: type=1327 audit(1761701292.891:218): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 29 01:28:12.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 29 01:28:12.894000 audit[2455]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:12.906496 kernel: audit: type=1325 audit(1761701292.894:219): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:12.894000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd53b85400 a2=0 a3=7ffd53b853ec items=0 ppid=2360 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.910724 kernel: audit: type=1300 audit(1761701292.894:219): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd53b85400 a2=0 a3=7ffd53b853ec items=0 ppid=2360 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 29 01:28:12.912851 kernel: audit: type=1327 
audit(1761701292.894:219): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 29 01:28:12.897000 audit[2454]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:12.897000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf07b2180 a2=0 a3=7ffdf07b216c items=0 ppid=2360 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 01:28:12.915216 kernel: audit: type=1325 audit(1761701292.897:220): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:12.897000 audit[2456]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:12.897000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeef879ce0 a2=0 a3=7ffeef879ccc items=0 ppid=2360 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 29 01:28:12.899000 audit[2457]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:12.899000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd7c2ff20 a2=0 a3=7ffdd7c2ff0c 
items=0 ppid=2360 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:12.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 29 01:28:13.012000 audit[2458]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.012000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe378f2090 a2=0 a3=7ffe378f207c items=0 ppid=2360 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 29 01:28:13.015000 audit[2460]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.015000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff5a2a2fb0 a2=0 a3=7fff5a2a2f9c items=0 ppid=2360 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 29 01:28:13.018000 audit[2463]: NETFILTER_CFG table=filter:46 family=2 entries=1 
op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.018000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff1bf3eed0 a2=0 a3=7fff1bf3eebc items=0 ppid=2360 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 29 01:28:13.018000 audit[2464]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.018000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec5a29610 a2=0 a3=7ffec5a295fc items=0 ppid=2360 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 29 01:28:13.022000 audit[2466]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.022000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe16319040 a2=0 a3=7ffe1631902c items=0 ppid=2360 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.022000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 29 01:28:13.023000 audit[2467]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.023000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdedeb8540 a2=0 a3=7ffdedeb852c items=0 ppid=2360 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 29 01:28:13.024000 audit[2469]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.024000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd72b82b40 a2=0 a3=7ffd72b82b2c items=0 ppid=2360 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 29 01:28:13.027000 audit[2472]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.027000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffd206c0c60 a2=0 a3=7ffd206c0c4c items=0 ppid=2360 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 29 01:28:13.027000 audit[2473]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.027000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc105b6750 a2=0 a3=7ffc105b673c items=0 ppid=2360 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 29 01:28:13.029000 audit[2475]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.029000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc88c82040 a2=0 a3=7ffc88c8202c items=0 ppid=2360 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.029000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 29 01:28:13.030000 audit[2476]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.030000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec7690830 a2=0 a3=7ffec769081c items=0 ppid=2360 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 29 01:28:13.032000 audit[2478]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.032000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb84cac40 a2=0 a3=7fffb84cac2c items=0 ppid=2360 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 29 01:28:13.034000 audit[2481]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.034000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffde3b5bf20 a2=0 a3=7ffde3b5bf0c items=0 ppid=2360 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 29 01:28:13.037000 audit[2484]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.037000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8fdfa150 a2=0 a3=7ffc8fdfa13c items=0 ppid=2360 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 29 01:28:13.037000 audit[2485]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.037000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdcbf96ee0 a2=0 a3=7ffdcbf96ecc items=0 ppid=2360 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.037000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 29 01:28:13.039000 audit[2487]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.039000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffde6f1e6e0 a2=0 a3=7ffde6f1e6cc items=0 ppid=2360 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 01:28:13.042000 audit[2490]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.042000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffbbbf7030 a2=0 a3=7fffbbbf701c items=0 ppid=2360 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 01:28:13.042000 audit[2491]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.042000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1cbd1950 a2=0 a3=7ffe1cbd193c items=0 ppid=2360 pid=2491 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 29 01:28:13.044000 audit[2493]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 01:28:13.044000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc1d38ff80 a2=0 a3=7ffc1d38ff6c items=0 ppid=2360 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 29 01:28:13.064000 audit[2499]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:13.064000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc8ed9f50 a2=0 a3=7fffc8ed9f3c items=0 ppid=2360 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:13.071000 audit[2499]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Oct 29 01:28:13.071000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffc8ed9f50 a2=0 a3=7fffc8ed9f3c items=0 ppid=2360 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.071000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:13.073000 audit[2504]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.073000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffe8833350 a2=0 a3=7fffe883333c items=0 ppid=2360 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 29 01:28:13.075000 audit[2506]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.075000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe39ea41d0 a2=0 a3=7ffe39ea41bc items=0 ppid=2360 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.075000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 29 01:28:13.078000 audit[2509]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.078000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe7a8c84b0 a2=0 a3=7ffe7a8c849c items=0 ppid=2360 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 29 01:28:13.080000 audit[2510]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.080000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb88f58a0 a2=0 a3=7fffb88f588c items=0 ppid=2360 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 29 01:28:13.082000 audit[2512]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.082000 audit[2512]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec19b4a50 a2=0 a3=7ffec19b4a3c items=0 ppid=2360 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.082000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 29 01:28:13.083000 audit[2513]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.083000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd8bff80 a2=0 a3=7ffdfd8bff6c items=0 ppid=2360 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 29 01:28:13.085000 audit[2515]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.085000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc53a70e80 a2=0 a3=7ffc53a70e6c items=0 ppid=2360 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.085000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 29 01:28:13.087000 audit[2518]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.087000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe03c93570 a2=0 a3=7ffe03c9355c items=0 ppid=2360 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 29 01:28:13.088000 audit[2519]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.088000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd82c5fd60 a2=0 a3=7ffd82c5fd4c items=0 ppid=2360 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 29 01:28:13.090000 audit[2521]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.090000 audit[2521]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe4781f0f0 a2=0 a3=7ffe4781f0dc items=0 ppid=2360 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 29 01:28:13.091000 audit[2522]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.091000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc57bc5ac0 a2=0 a3=7ffc57bc5aac items=0 ppid=2360 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.091000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 29 01:28:13.092000 audit[2524]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.092000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff573ca160 a2=0 a3=7fff573ca14c items=0 ppid=2360 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.092000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 29 01:28:13.095000 audit[2527]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.095000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7a8642c0 a2=0 a3=7ffd7a8642ac items=0 ppid=2360 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 29 01:28:13.097000 audit[2530]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.097000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc67511010 a2=0 a3=7ffc67510ffc items=0 ppid=2360 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 29 01:28:13.098000 audit[2531]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.098000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfdca33a0 a2=0 a3=7ffcfdca338c items=0 ppid=2360 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 29 01:28:13.099000 audit[2533]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.099000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff3f107620 a2=0 a3=7fff3f10760c items=0 ppid=2360 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 01:28:13.102000 audit[2536]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.102000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff897274f0 a2=0 a3=7fff897274dc items=0 ppid=2360 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.102000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 01:28:13.102000 audit[2537]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.102000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4a276cf0 a2=0 a3=7fff4a276cdc items=0 ppid=2360 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 29 01:28:13.104000 audit[2539]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.104000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc2c634440 a2=0 a3=7ffc2c63442c items=0 ppid=2360 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 29 01:28:13.105000 audit[2540]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.105000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe19421eb0 a2=0 
a3=7ffe19421e9c items=0 ppid=2360 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 29 01:28:13.106000 audit[2542]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.106000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff5cbfdd40 a2=0 a3=7fff5cbfdd2c items=0 ppid=2360 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 01:28:13.109000 audit[2545]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 01:28:13.109000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc082aee30 a2=0 a3=7ffc082aee1c items=0 ppid=2360 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 01:28:13.113000 audit[2547]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 29 01:28:13.113000 audit[2547]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeea398c50 a2=0 a3=7ffeea398c3c items=0 ppid=2360 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.113000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:13.113000 audit[2547]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 29 01:28:13.113000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeea398c50 a2=0 a3=7ffeea398c3c items=0 ppid=2360 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:13.113000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:14.034765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760682834.mount: Deactivated successfully. 
Oct 29 01:28:14.855823 env[1344]: time="2025-10-29T01:28:14.855790470Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:14.865768 env[1344]: time="2025-10-29T01:28:14.865739131Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:14.874579 env[1344]: time="2025-10-29T01:28:14.874380267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:14.877397 env[1344]: time="2025-10-29T01:28:14.877375497Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:14.877853 env[1344]: time="2025-10-29T01:28:14.877829759Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 01:28:14.880563 env[1344]: time="2025-10-29T01:28:14.880538605Z" level=info msg="CreateContainer within sandbox \"2065b92e3a9330f197d324bb77948f132edf55e17351edd1dc8bc1e96bb47204\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 01:28:14.914040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283440327.mount: Deactivated successfully. Oct 29 01:28:14.918625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70946916.mount: Deactivated successfully. 
Oct 29 01:28:14.920594 env[1344]: time="2025-10-29T01:28:14.920563727Z" level=info msg="CreateContainer within sandbox \"2065b92e3a9330f197d324bb77948f132edf55e17351edd1dc8bc1e96bb47204\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6f049e7135da5be514bd4156f4883265fe0840fc21f5b2cd0ef98192864cbb3c\"" Oct 29 01:28:14.921341 env[1344]: time="2025-10-29T01:28:14.921322910Z" level=info msg="StartContainer for \"6f049e7135da5be514bd4156f4883265fe0840fc21f5b2cd0ef98192864cbb3c\"" Oct 29 01:28:14.958856 env[1344]: time="2025-10-29T01:28:14.958834099Z" level=info msg="StartContainer for \"6f049e7135da5be514bd4156f4883265fe0840fc21f5b2cd0ef98192864cbb3c\" returns successfully" Oct 29 01:28:15.563794 kubelet[2259]: I1029 01:28:15.563493 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27vgs" podStartSLOduration=3.563479279 podStartE2EDuration="3.563479279s" podCreationTimestamp="2025-10-29 01:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 01:28:12.56016241 +0000 UTC m=+5.216277443" watchObservedRunningTime="2025-10-29 01:28:15.563479279 +0000 UTC m=+8.219594303" Oct 29 01:28:19.229193 kubelet[2259]: I1029 01:28:19.229156 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-k2xg5" podStartSLOduration=5.109077403 podStartE2EDuration="7.229146397s" podCreationTimestamp="2025-10-29 01:28:12 +0000 UTC" firstStartedPulling="2025-10-29 01:28:12.758712738 +0000 UTC m=+5.414827761" lastFinishedPulling="2025-10-29 01:28:14.878781731 +0000 UTC m=+7.534896755" observedRunningTime="2025-10-29 01:28:15.564109722 +0000 UTC m=+8.220224752" watchObservedRunningTime="2025-10-29 01:28:19.229146397 +0000 UTC m=+11.885261428" Oct 29 01:28:19.990593 sudo[1608]: pam_unix(sudo:session): session closed for user root Oct 29 01:28:19.995248 
kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 29 01:28:19.995284 kernel: audit: type=1106 audit(1761701299.990:268): pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:28:19.995302 kernel: audit: type=1104 audit(1761701299.990:269): pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:28:19.990000 audit[1608]: USER_END pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:28:19.990000 audit[1608]: CRED_DISP pid=1608 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 01:28:20.003783 sshd[1602]: pam_unix(sshd:session): session closed for user core Oct 29 01:28:20.005000 audit[1602]: USER_END pid=1602 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:28:20.005000 audit[1602]: CRED_DISP pid=1602 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:28:20.011115 systemd[1]: sshd@6-139.178.70.110:22-139.178.68.195:59240.service: Deactivated successfully. 
Oct 29 01:28:20.012066 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 01:28:20.012402 systemd-logind[1329]: Session 9 logged out. Waiting for processes to exit. Oct 29 01:28:20.013794 kernel: audit: type=1106 audit(1761701300.005:270): pid=1602 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:28:20.013848 kernel: audit: type=1104 audit(1761701300.005:271): pid=1602 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:28:20.013419 systemd-logind[1329]: Removed session 9. Oct 29 01:28:20.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.68.195:59240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:28:20.019227 kernel: audit: type=1131 audit(1761701300.010:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.110:22-139.178.68.195:59240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:28:20.590000 audit[2631]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:20.593201 kernel: audit: type=1325 audit(1761701300.590:273): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:20.590000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe14c99650 a2=0 a3=7ffe14c9963c items=0 ppid=2360 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:20.599200 kernel: audit: type=1300 audit(1761701300.590:273): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe14c99650 a2=0 a3=7ffe14c9963c items=0 ppid=2360 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:20.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:20.601203 kernel: audit: type=1327 audit(1761701300.590:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:20.601000 audit[2631]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:20.604200 kernel: audit: type=1325 audit(1761701300.601:274): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:20.604232 kernel: audit: type=1300 audit(1761701300.601:274): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe14c99650 a2=0 
a3=0 items=0 ppid=2360 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:20.601000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe14c99650 a2=0 a3=0 items=0 ppid=2360 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:20.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:20.640000 audit[2633]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:20.640000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd7d3ac550 a2=0 a3=7ffd7d3ac53c items=0 ppid=2360 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:20.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:20.645000 audit[2633]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:20.645000 audit[2633]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7d3ac550 a2=0 a3=0 items=0 ppid=2360 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:20.645000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:22.019000 audit[2635]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:22.019000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffccfea1390 a2=0 a3=7ffccfea137c items=0 ppid=2360 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:22.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:22.027000 audit[2635]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:22.027000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffccfea1390 a2=0 a3=0 items=0 ppid=2360 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:22.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:23.049000 audit[2637]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:23.049000 audit[2637]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff088807e0 a2=0 a3=7fff088807cc items=0 ppid=2360 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:23.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:23.054000 audit[2637]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:23.054000 audit[2637]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff088807e0 a2=0 a3=0 items=0 ppid=2360 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:23.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:23.900000 audit[2639]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:23.900000 audit[2639]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe574d38c0 a2=0 a3=7ffe574d38ac items=0 ppid=2360 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:23.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:23.904000 audit[2639]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2639 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:23.904000 audit[2639]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe574d38c0 a2=0 a3=0 items=0 ppid=2360 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:23.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:23.994747 kubelet[2259]: I1029 01:28:23.994715 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gzl\" (UniqueName: \"kubernetes.io/projected/61a4a29d-22f4-470b-b7cf-52cbff01e605-kube-api-access-64gzl\") pod \"calico-typha-84744f8cb6-sc4sw\" (UID: \"61a4a29d-22f4-470b-b7cf-52cbff01e605\") " pod="calico-system/calico-typha-84744f8cb6-sc4sw" Oct 29 01:28:23.994747 kubelet[2259]: I1029 01:28:23.994748 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61a4a29d-22f4-470b-b7cf-52cbff01e605-tigera-ca-bundle\") pod \"calico-typha-84744f8cb6-sc4sw\" (UID: \"61a4a29d-22f4-470b-b7cf-52cbff01e605\") " pod="calico-system/calico-typha-84744f8cb6-sc4sw" Oct 29 01:28:23.995030 kubelet[2259]: I1029 01:28:23.994783 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/61a4a29d-22f4-470b-b7cf-52cbff01e605-typha-certs\") pod \"calico-typha-84744f8cb6-sc4sw\" (UID: \"61a4a29d-22f4-470b-b7cf-52cbff01e605\") " pod="calico-system/calico-typha-84744f8cb6-sc4sw" Oct 29 01:28:24.195904 kubelet[2259]: I1029 01:28:24.195829 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-node-certs\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.195904 kubelet[2259]: I1029 01:28:24.195870 2259 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-cni-bin-dir\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.195904 kubelet[2259]: I1029 01:28:24.195881 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-cni-log-dir\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.195904 kubelet[2259]: I1029 01:28:24.195891 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-tigera-ca-bundle\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.195904 kubelet[2259]: I1029 01:28:24.195901 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-xtables-lock\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196078 kubelet[2259]: I1029 01:28:24.195911 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-policysync\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196078 kubelet[2259]: I1029 01:28:24.195920 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-var-lib-calico\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196078 kubelet[2259]: I1029 01:28:24.195938 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-var-run-calico\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196078 kubelet[2259]: I1029 01:28:24.195949 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-cni-net-dir\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196078 kubelet[2259]: I1029 01:28:24.195963 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-flexvol-driver-host\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196172 kubelet[2259]: I1029 01:28:24.195976 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-lib-modules\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.196172 kubelet[2259]: I1029 01:28:24.195986 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njg9v\" (UniqueName: 
\"kubernetes.io/projected/1ab5ba0e-8341-4839-ac62-f2e9667ee4d1-kube-api-access-njg9v\") pod \"calico-node-tb6kq\" (UID: \"1ab5ba0e-8341-4839-ac62-f2e9667ee4d1\") " pod="calico-system/calico-node-tb6kq" Oct 29 01:28:24.247901 env[1344]: time="2025-10-29T01:28:24.247854692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84744f8cb6-sc4sw,Uid:61a4a29d-22f4-470b-b7cf-52cbff01e605,Namespace:calico-system,Attempt:0,}" Oct 29 01:28:24.279034 env[1344]: time="2025-10-29T01:28:24.278890717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:24.279034 env[1344]: time="2025-10-29T01:28:24.278924499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:24.279034 env[1344]: time="2025-10-29T01:28:24.278934200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:24.279237 env[1344]: time="2025-10-29T01:28:24.279093049Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9106ea080ee93127828d2da7d822ec686ab1dc04c5ad235756eb2a46709bf224 pid=2649 runtime=io.containerd.runc.v2 Oct 29 01:28:24.320241 kubelet[2259]: E1029 01:28:24.320175 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.320458 kubelet[2259]: W1029 01:28:24.320442 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.323897 kubelet[2259]: E1029 01:28:24.323345 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.327983 kubelet[2259]: E1029 01:28:24.324539 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.327983 kubelet[2259]: W1029 01:28:24.324551 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.327983 kubelet[2259]: E1029 01:28:24.324563 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.327983 kubelet[2259]: E1029 01:28:24.327827 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:24.363112 env[1344]: time="2025-10-29T01:28:24.363080426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84744f8cb6-sc4sw,Uid:61a4a29d-22f4-470b-b7cf-52cbff01e605,Namespace:calico-system,Attempt:0,} returns sandbox id \"9106ea080ee93127828d2da7d822ec686ab1dc04c5ad235756eb2a46709bf224\"" Oct 29 01:28:24.364195 env[1344]: time="2025-10-29T01:28:24.364165378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 01:28:24.396453 kubelet[2259]: E1029 01:28:24.396432 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.396453 kubelet[2259]: W1029 01:28:24.396448 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Oct 29 01:28:24.396574 kubelet[2259]: E1029 01:28:24.396463 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.396574 kubelet[2259]: E1029 01:28:24.396559 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.396574 kubelet[2259]: W1029 01:28:24.396564 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.396574 kubelet[2259]: E1029 01:28:24.396570 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.396672 kubelet[2259]: E1029 01:28:24.396643 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.396672 kubelet[2259]: W1029 01:28:24.396648 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.396672 kubelet[2259]: E1029 01:28:24.396653 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.399150 kubelet[2259]: E1029 01:28:24.399136 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.399150 kubelet[2259]: W1029 01:28:24.399147 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.399234 kubelet[2259]: E1029 01:28:24.399155 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.399322 kubelet[2259]: E1029 01:28:24.399280 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.399322 kubelet[2259]: W1029 01:28:24.399286 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.399322 kubelet[2259]: E1029 01:28:24.399292 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.399878 kubelet[2259]: E1029 01:28:24.399867 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.399878 kubelet[2259]: W1029 01:28:24.399876 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.399934 kubelet[2259]: E1029 01:28:24.399882 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.400004 kubelet[2259]: E1029 01:28:24.399994 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.400004 kubelet[2259]: W1029 01:28:24.400003 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.400080 kubelet[2259]: E1029 01:28:24.400010 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.400481 kubelet[2259]: E1029 01:28:24.400468 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.400481 kubelet[2259]: W1029 01:28:24.400476 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.400547 kubelet[2259]: E1029 01:28:24.400483 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.400600 kubelet[2259]: E1029 01:28:24.400588 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.400600 kubelet[2259]: W1029 01:28:24.400596 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.400658 kubelet[2259]: E1029 01:28:24.400602 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.400687 kubelet[2259]: E1029 01:28:24.400677 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.400687 kubelet[2259]: W1029 01:28:24.400684 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.400741 kubelet[2259]: E1029 01:28:24.400690 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.400770 kubelet[2259]: E1029 01:28:24.400760 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.400770 kubelet[2259]: W1029 01:28:24.400767 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.400772 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.400844 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406276 kubelet[2259]: W1029 01:28:24.400848 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.400853 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.400951 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406276 kubelet[2259]: W1029 01:28:24.400956 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.400961 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.401054 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406276 kubelet[2259]: W1029 01:28:24.401059 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406276 kubelet[2259]: E1029 01:28:24.401064 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401152 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406465 kubelet[2259]: W1029 01:28:24.401157 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401162 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401278 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406465 kubelet[2259]: W1029 01:28:24.401283 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401287 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401410 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406465 kubelet[2259]: W1029 01:28:24.401415 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401420 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406465 kubelet[2259]: E1029 01:28:24.401519 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406647 kubelet[2259]: W1029 01:28:24.401525 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406647 kubelet[2259]: E1029 01:28:24.401531 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406647 kubelet[2259]: E1029 01:28:24.401619 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406647 kubelet[2259]: W1029 01:28:24.401624 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406647 kubelet[2259]: E1029 01:28:24.401629 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406647 kubelet[2259]: E1029 01:28:24.401717 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406647 kubelet[2259]: W1029 01:28:24.401722 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406647 kubelet[2259]: E1029 01:28:24.401728 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406647 kubelet[2259]: E1029 01:28:24.401861 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406647 kubelet[2259]: W1029 01:28:24.401866 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406823 kubelet[2259]: E1029 01:28:24.401870 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406823 kubelet[2259]: I1029 01:28:24.401885 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1b41dfbb-cb8d-4095-9219-ece15b48c5c3-socket-dir\") pod \"csi-node-driver-6w9mz\" (UID: \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\") " pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:24.406823 kubelet[2259]: E1029 01:28:24.401990 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406823 kubelet[2259]: W1029 01:28:24.401996 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406823 kubelet[2259]: E1029 01:28:24.402005 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406823 kubelet[2259]: I1029 01:28:24.402014 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b41dfbb-cb8d-4095-9219-ece15b48c5c3-kubelet-dir\") pod \"csi-node-driver-6w9mz\" (UID: \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\") " pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:24.406823 kubelet[2259]: E1029 01:28:24.402109 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406823 kubelet[2259]: W1029 01:28:24.402117 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406823 kubelet[2259]: E1029 01:28:24.402129 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406982 kubelet[2259]: E1029 01:28:24.402247 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406982 kubelet[2259]: W1029 01:28:24.402251 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406982 kubelet[2259]: E1029 01:28:24.402259 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.406982 kubelet[2259]: E1029 01:28:24.402354 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406982 kubelet[2259]: W1029 01:28:24.402359 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.406982 kubelet[2259]: E1029 01:28:24.402368 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.406982 kubelet[2259]: I1029 01:28:24.402380 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1b41dfbb-cb8d-4095-9219-ece15b48c5c3-varrun\") pod \"csi-node-driver-6w9mz\" (UID: \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\") " pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:24.406982 kubelet[2259]: E1029 01:28:24.402483 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.406982 kubelet[2259]: W1029 01:28:24.402488 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407147 kubelet[2259]: E1029 01:28:24.402494 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.407147 kubelet[2259]: I1029 01:28:24.402511 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1b41dfbb-cb8d-4095-9219-ece15b48c5c3-registration-dir\") pod \"csi-node-driver-6w9mz\" (UID: \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\") " pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:24.407147 kubelet[2259]: E1029 01:28:24.402591 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407147 kubelet[2259]: W1029 01:28:24.402596 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407147 kubelet[2259]: E1029 01:28:24.402606 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.407147 kubelet[2259]: E1029 01:28:24.402693 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407147 kubelet[2259]: W1029 01:28:24.402699 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407147 kubelet[2259]: E1029 01:28:24.402709 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.407147 kubelet[2259]: E1029 01:28:24.402803 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407323 kubelet[2259]: W1029 01:28:24.402808 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407323 kubelet[2259]: E1029 01:28:24.402815 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.407323 kubelet[2259]: I1029 01:28:24.402823 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hhz7\" (UniqueName: \"kubernetes.io/projected/1b41dfbb-cb8d-4095-9219-ece15b48c5c3-kube-api-access-8hhz7\") pod \"csi-node-driver-6w9mz\" (UID: \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\") " pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:24.407323 kubelet[2259]: E1029 01:28:24.402914 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407323 kubelet[2259]: W1029 01:28:24.402918 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407323 kubelet[2259]: E1029 01:28:24.402927 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.407323 kubelet[2259]: E1029 01:28:24.403009 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407323 kubelet[2259]: W1029 01:28:24.403014 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407323 kubelet[2259]: E1029 01:28:24.403018 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403114 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407491 kubelet[2259]: W1029 01:28:24.403119 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403127 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403227 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407491 kubelet[2259]: W1029 01:28:24.403232 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403237 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403344 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407491 kubelet[2259]: W1029 01:28:24.403348 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403353 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.407491 kubelet[2259]: E1029 01:28:24.403443 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.407672 kubelet[2259]: W1029 01:28:24.403449 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.407672 kubelet[2259]: E1029 01:28:24.403453 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.435360 env[1344]: time="2025-10-29T01:28:24.435039347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tb6kq,Uid:1ab5ba0e-8341-4839-ac62-f2e9667ee4d1,Namespace:calico-system,Attempt:0,}" Oct 29 01:28:24.477057 env[1344]: time="2025-10-29T01:28:24.475611873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:24.477057 env[1344]: time="2025-10-29T01:28:24.475643170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:24.477057 env[1344]: time="2025-10-29T01:28:24.475650969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:24.477057 env[1344]: time="2025-10-29T01:28:24.475766222Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6 pid=2739 runtime=io.containerd.runc.v2 Oct 29 01:28:24.503831 kubelet[2259]: E1029 01:28:24.503809 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.503831 kubelet[2259]: W1029 01:28:24.503824 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.503951 kubelet[2259]: E1029 01:28:24.503837 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.503978 kubelet[2259]: E1029 01:28:24.503956 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.503978 kubelet[2259]: W1029 01:28:24.503961 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.503978 kubelet[2259]: E1029 01:28:24.503967 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.504071 kubelet[2259]: E1029 01:28:24.504059 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504313 kubelet[2259]: W1029 01:28:24.504072 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504313 kubelet[2259]: E1029 01:28:24.504079 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.504313 kubelet[2259]: E1029 01:28:24.504230 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504313 kubelet[2259]: W1029 01:28:24.504235 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504313 kubelet[2259]: E1029 01:28:24.504243 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.504415 kubelet[2259]: E1029 01:28:24.504365 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504415 kubelet[2259]: W1029 01:28:24.504376 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504415 kubelet[2259]: E1029 01:28:24.504382 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.504743 kubelet[2259]: E1029 01:28:24.504473 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504743 kubelet[2259]: W1029 01:28:24.504478 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504743 kubelet[2259]: E1029 01:28:24.504484 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.504743 kubelet[2259]: E1029 01:28:24.504573 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504743 kubelet[2259]: W1029 01:28:24.504577 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504743 kubelet[2259]: E1029 01:28:24.504591 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.504743 kubelet[2259]: E1029 01:28:24.504687 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504743 kubelet[2259]: W1029 01:28:24.504692 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504743 kubelet[2259]: E1029 01:28:24.504717 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.504932 kubelet[2259]: E1029 01:28:24.504826 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.504932 kubelet[2259]: W1029 01:28:24.504839 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.504932 kubelet[2259]: E1029 01:28:24.504846 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.505866 kubelet[2259]: E1029 01:28:24.505854 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.505866 kubelet[2259]: W1029 01:28:24.505864 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.505932 kubelet[2259]: E1029 01:28:24.505873 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.506067 kubelet[2259]: E1029 01:28:24.506054 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.506067 kubelet[2259]: W1029 01:28:24.506063 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.506131 kubelet[2259]: E1029 01:28:24.506109 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.506383 kubelet[2259]: E1029 01:28:24.506366 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.506383 kubelet[2259]: W1029 01:28:24.506377 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.506452 kubelet[2259]: E1029 01:28:24.506411 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506556 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508067 kubelet[2259]: W1029 01:28:24.506561 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506594 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506703 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508067 kubelet[2259]: W1029 01:28:24.506708 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506756 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506811 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508067 kubelet[2259]: W1029 01:28:24.506815 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506837 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.508067 kubelet[2259]: E1029 01:28:24.506981 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508314 kubelet[2259]: W1029 01:28:24.506988 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508314 kubelet[2259]: E1029 01:28:24.506995 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.508314 kubelet[2259]: E1029 01:28:24.507094 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508314 kubelet[2259]: W1029 01:28:24.507099 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508314 kubelet[2259]: E1029 01:28:24.507106 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.508314 kubelet[2259]: E1029 01:28:24.507245 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508314 kubelet[2259]: W1029 01:28:24.507250 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508314 kubelet[2259]: E1029 01:28:24.507257 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.508314 kubelet[2259]: E1029 01:28:24.507385 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508314 kubelet[2259]: W1029 01:28:24.507390 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507397 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507491 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508714 kubelet[2259]: W1029 01:28:24.507495 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507519 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507611 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508714 kubelet[2259]: W1029 01:28:24.507617 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507624 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507785 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508714 kubelet[2259]: W1029 01:28:24.507791 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508714 kubelet[2259]: E1029 01:28:24.507796 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.508897 kubelet[2259]: E1029 01:28:24.508086 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.508897 kubelet[2259]: W1029 01:28:24.508091 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.508897 kubelet[2259]: E1029 01:28:24.508097 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.509337 kubelet[2259]: E1029 01:28:24.509324 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.509382 kubelet[2259]: W1029 01:28:24.509346 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.509382 kubelet[2259]: E1029 01:28:24.509358 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.510513 kubelet[2259]: E1029 01:28:24.510501 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.510513 kubelet[2259]: W1029 01:28:24.510510 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.510576 kubelet[2259]: E1029 01:28:24.510526 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:24.510626 kubelet[2259]: E1029 01:28:24.510618 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:24.510626 kubelet[2259]: W1029 01:28:24.510625 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:24.510679 kubelet[2259]: E1029 01:28:24.510630 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:24.512380 env[1344]: time="2025-10-29T01:28:24.512353900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tb6kq,Uid:1ab5ba0e-8341-4839-ac62-f2e9667ee4d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\"" Oct 29 01:28:24.916000 audit[2803]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:24.916000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcf648a070 a2=0 a3=7ffcf648a05c items=0 ppid=2360 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:24.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:24.919000 audit[2803]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2803 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:24.919000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf648a070 a2=0 a3=0 items=0 ppid=2360 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:24.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:25.102638 systemd[1]: run-containerd-runc-k8s.io-9106ea080ee93127828d2da7d822ec686ab1dc04c5ad235756eb2a46709bf224-runc.jYyF7v.mount: Deactivated successfully. 
Oct 29 01:28:25.489614 kubelet[2259]: E1029 01:28:25.489589 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:26.372443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160293872.mount: Deactivated successfully. Oct 29 01:28:27.158662 env[1344]: time="2025-10-29T01:28:27.158637510Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:27.159946 env[1344]: time="2025-10-29T01:28:27.159930461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:27.161043 env[1344]: time="2025-10-29T01:28:27.161031476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:27.162150 env[1344]: time="2025-10-29T01:28:27.162134545Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:27.162842 env[1344]: time="2025-10-29T01:28:27.162826595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 01:28:27.164657 env[1344]: time="2025-10-29T01:28:27.164641081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 01:28:27.174968 
env[1344]: time="2025-10-29T01:28:27.174943233Z" level=info msg="CreateContainer within sandbox \"9106ea080ee93127828d2da7d822ec686ab1dc04c5ad235756eb2a46709bf224\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 01:28:27.204039 env[1344]: time="2025-10-29T01:28:27.204014758Z" level=info msg="CreateContainer within sandbox \"9106ea080ee93127828d2da7d822ec686ab1dc04c5ad235756eb2a46709bf224\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"29a7398c32fbd6933cacaaa258365a78540f1f93977aa0826f14033f9af524f6\"" Oct 29 01:28:27.204537 env[1344]: time="2025-10-29T01:28:27.204526473Z" level=info msg="StartContainer for \"29a7398c32fbd6933cacaaa258365a78540f1f93977aa0826f14033f9af524f6\"" Oct 29 01:28:27.253502 env[1344]: time="2025-10-29T01:28:27.253472144Z" level=info msg="StartContainer for \"29a7398c32fbd6933cacaaa258365a78540f1f93977aa0826f14033f9af524f6\" returns successfully" Oct 29 01:28:27.331572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63250111.mount: Deactivated successfully. 
Oct 29 01:28:27.490091 kubelet[2259]: E1029 01:28:27.490021 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:27.583924 kubelet[2259]: I1029 01:28:27.583889 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84744f8cb6-sc4sw" podStartSLOduration=1.783353998 podStartE2EDuration="4.583879052s" podCreationTimestamp="2025-10-29 01:28:23 +0000 UTC" firstStartedPulling="2025-10-29 01:28:24.363855318 +0000 UTC m=+17.019970338" lastFinishedPulling="2025-10-29 01:28:27.164380367 +0000 UTC m=+19.820495392" observedRunningTime="2025-10-29 01:28:27.583534901 +0000 UTC m=+20.239649932" watchObservedRunningTime="2025-10-29 01:28:27.583879052 +0000 UTC m=+20.239994083" Oct 29 01:28:27.622711 kubelet[2259]: E1029 01:28:27.622693 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.622871 kubelet[2259]: W1029 01:28:27.622856 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.622942 kubelet[2259]: E1029 01:28:27.622930 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.623133 kubelet[2259]: E1029 01:28:27.623124 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.623213 kubelet[2259]: W1029 01:28:27.623203 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.623285 kubelet[2259]: E1029 01:28:27.623274 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.623465 kubelet[2259]: E1029 01:28:27.623456 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.623527 kubelet[2259]: W1029 01:28:27.623516 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.623599 kubelet[2259]: E1029 01:28:27.623588 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.623835 kubelet[2259]: E1029 01:28:27.623798 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.623899 kubelet[2259]: W1029 01:28:27.623888 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.623966 kubelet[2259]: E1029 01:28:27.623955 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.624164 kubelet[2259]: E1029 01:28:27.624155 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.624274 kubelet[2259]: W1029 01:28:27.624263 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.624339 kubelet[2259]: E1029 01:28:27.624327 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.624518 kubelet[2259]: E1029 01:28:27.624509 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.624576 kubelet[2259]: W1029 01:28:27.624565 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.624642 kubelet[2259]: E1029 01:28:27.624631 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.624815 kubelet[2259]: E1029 01:28:27.624807 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.624876 kubelet[2259]: W1029 01:28:27.624865 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.624939 kubelet[2259]: E1029 01:28:27.624929 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.625113 kubelet[2259]: E1029 01:28:27.625105 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.625174 kubelet[2259]: W1029 01:28:27.625163 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.625264 kubelet[2259]: E1029 01:28:27.625253 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.625464 kubelet[2259]: E1029 01:28:27.625455 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.625525 kubelet[2259]: W1029 01:28:27.625514 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.625589 kubelet[2259]: E1029 01:28:27.625578 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.625757 kubelet[2259]: E1029 01:28:27.625749 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.625818 kubelet[2259]: W1029 01:28:27.625807 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.625884 kubelet[2259]: E1029 01:28:27.625872 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.626058 kubelet[2259]: E1029 01:28:27.626049 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.626118 kubelet[2259]: W1029 01:28:27.626107 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626222 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626344 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673279 kubelet[2259]: W1029 01:28:27.626350 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626356 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626511 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673279 kubelet[2259]: W1029 01:28:27.626517 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626523 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626637 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673279 kubelet[2259]: W1029 01:28:27.626645 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673279 kubelet[2259]: E1029 01:28:27.626651 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.626767 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673627 kubelet[2259]: W1029 01:28:27.626772 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.626778 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.626935 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673627 kubelet[2259]: W1029 01:28:27.626941 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.626947 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.627114 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673627 kubelet[2259]: W1029 01:28:27.627121 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.627133 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.673627 kubelet[2259]: E1029 01:28:27.627264 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673894 kubelet[2259]: W1029 01:28:27.627274 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673894 kubelet[2259]: E1029 01:28:27.627290 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.673894 kubelet[2259]: E1029 01:28:27.627431 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673894 kubelet[2259]: W1029 01:28:27.627440 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673894 kubelet[2259]: E1029 01:28:27.627450 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.673894 kubelet[2259]: E1029 01:28:27.627576 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673894 kubelet[2259]: W1029 01:28:27.627583 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.673894 kubelet[2259]: E1029 01:28:27.627590 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.673894 kubelet[2259]: E1029 01:28:27.627732 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.673894 kubelet[2259]: W1029 01:28:27.627738 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.627751 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.627916 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.677795 kubelet[2259]: W1029 01:28:27.627922 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.627933 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.628050 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.677795 kubelet[2259]: W1029 01:28:27.628057 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.628069 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.628213 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.677795 kubelet[2259]: W1029 01:28:27.628220 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.677795 kubelet[2259]: E1029 01:28:27.628230 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628350 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.682044 kubelet[2259]: W1029 01:28:27.628357 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628364 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628505 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.682044 kubelet[2259]: W1029 01:28:27.628512 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628525 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628624 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.682044 kubelet[2259]: W1029 01:28:27.628631 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628638 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.682044 kubelet[2259]: E1029 01:28:27.628809 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.683474 kubelet[2259]: W1029 01:28:27.628815 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.683474 kubelet[2259]: E1029 01:28:27.628827 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.683474 kubelet[2259]: E1029 01:28:27.628955 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.683474 kubelet[2259]: W1029 01:28:27.628961 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.683474 kubelet[2259]: E1029 01:28:27.628972 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.683474 kubelet[2259]: E1029 01:28:27.629099 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.683474 kubelet[2259]: W1029 01:28:27.629105 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.683474 kubelet[2259]: E1029 01:28:27.629117 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.683474 kubelet[2259]: E1029 01:28:27.629279 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.683474 kubelet[2259]: W1029 01:28:27.629286 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.683730 kubelet[2259]: E1029 01:28:27.629297 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:27.683730 kubelet[2259]: E1029 01:28:27.629462 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.683730 kubelet[2259]: W1029 01:28:27.629469 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.683730 kubelet[2259]: E1029 01:28:27.629477 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:27.683730 kubelet[2259]: E1029 01:28:27.629590 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:27.683730 kubelet[2259]: W1029 01:28:27.629596 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:27.683730 kubelet[2259]: E1029 01:28:27.629604 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.566503 env[1344]: time="2025-10-29T01:28:28.566480583Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:28.570948 env[1344]: time="2025-10-29T01:28:28.570931854Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:28.572987 env[1344]: time="2025-10-29T01:28:28.572971249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:28.575323 kubelet[2259]: I1029 01:28:28.575303 2259 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 01:28:28.576140 env[1344]: time="2025-10-29T01:28:28.576119259Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:28.576542 env[1344]: time="2025-10-29T01:28:28.576519978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 01:28:28.579451 env[1344]: time="2025-10-29T01:28:28.579429297Z" level=info msg="CreateContainer within sandbox \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 01:28:28.605125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653899906.mount: Deactivated successfully. 
Oct 29 01:28:28.622498 env[1344]: time="2025-10-29T01:28:28.622467934Z" level=info msg="CreateContainer within sandbox \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cec571b1ec94443d424e8307a91bbb58c73ed09c8d14824fc9b4de5bfcad83ad\"" Oct 29 01:28:28.624121 env[1344]: time="2025-10-29T01:28:28.624065266Z" level=info msg="StartContainer for \"cec571b1ec94443d424e8307a91bbb58c73ed09c8d14824fc9b4de5bfcad83ad\"" Oct 29 01:28:28.631727 kubelet[2259]: E1029 01:28:28.631702 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.631808 kubelet[2259]: W1029 01:28:28.631732 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.631808 kubelet[2259]: E1029 01:28:28.631751 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.631908 kubelet[2259]: E1029 01:28:28.631893 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.631908 kubelet[2259]: W1029 01:28:28.631905 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.631988 kubelet[2259]: E1029 01:28:28.631916 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.632062 kubelet[2259]: E1029 01:28:28.632041 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.632062 kubelet[2259]: W1029 01:28:28.632058 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.632129 kubelet[2259]: E1029 01:28:28.632066 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632221 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633103 kubelet[2259]: W1029 01:28:28.632230 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632237 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632382 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633103 kubelet[2259]: W1029 01:28:28.632391 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632398 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632504 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633103 kubelet[2259]: W1029 01:28:28.632510 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632523 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.633103 kubelet[2259]: E1029 01:28:28.632637 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633452 kubelet[2259]: W1029 01:28:28.632643 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633452 kubelet[2259]: E1029 01:28:28.632662 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.633452 kubelet[2259]: E1029 01:28:28.632792 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633452 kubelet[2259]: W1029 01:28:28.632798 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633452 kubelet[2259]: E1029 01:28:28.632812 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.633452 kubelet[2259]: E1029 01:28:28.632931 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633452 kubelet[2259]: W1029 01:28:28.632944 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633452 kubelet[2259]: E1029 01:28:28.632950 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.633452 kubelet[2259]: E1029 01:28:28.633068 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633452 kubelet[2259]: W1029 01:28:28.633084 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633093 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633220 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633951 kubelet[2259]: W1029 01:28:28.633229 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633236 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633346 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633951 kubelet[2259]: W1029 01:28:28.633364 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633373 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633539 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.633951 kubelet[2259]: W1029 01:28:28.633547 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.633951 kubelet[2259]: E1029 01:28:28.633553 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.633667 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.634290 kubelet[2259]: W1029 01:28:28.633673 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.633687 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.633807 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.634290 kubelet[2259]: W1029 01:28:28.633823 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.633834 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.633985 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.634290 kubelet[2259]: W1029 01:28:28.633991 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.634004 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.634290 kubelet[2259]: E1029 01:28:28.634148 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.634554 kubelet[2259]: W1029 01:28:28.634154 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.634554 kubelet[2259]: E1029 01:28:28.634161 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.634554 kubelet[2259]: E1029 01:28:28.634351 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.634554 kubelet[2259]: W1029 01:28:28.634358 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.634554 kubelet[2259]: E1029 01:28:28.634365 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.636010 kubelet[2259]: E1029 01:28:28.635990 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.636010 kubelet[2259]: W1029 01:28:28.636002 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.636134 kubelet[2259]: E1029 01:28:28.636016 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.636331 kubelet[2259]: E1029 01:28:28.636315 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.636331 kubelet[2259]: W1029 01:28:28.636325 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.636470 kubelet[2259]: E1029 01:28:28.636388 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.636587 kubelet[2259]: E1029 01:28:28.636569 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.636674 kubelet[2259]: W1029 01:28:28.636611 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.638109 kubelet[2259]: E1029 01:28:28.638092 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.638765 kubelet[2259]: E1029 01:28:28.638750 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.638765 kubelet[2259]: W1029 01:28:28.638761 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.638828 kubelet[2259]: E1029 01:28:28.638815 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.638891 kubelet[2259]: E1029 01:28:28.638881 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.638891 kubelet[2259]: W1029 01:28:28.638888 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.638956 kubelet[2259]: E1029 01:28:28.638935 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.639014 kubelet[2259]: E1029 01:28:28.639003 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639014 kubelet[2259]: W1029 01:28:28.639012 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639077 kubelet[2259]: E1029 01:28:28.639030 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.639128 kubelet[2259]: E1029 01:28:28.639118 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639163 kubelet[2259]: W1029 01:28:28.639125 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639163 kubelet[2259]: E1029 01:28:28.639135 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.639268 kubelet[2259]: E1029 01:28:28.639257 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639268 kubelet[2259]: W1029 01:28:28.639267 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639330 kubelet[2259]: E1029 01:28:28.639283 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.639484 kubelet[2259]: E1029 01:28:28.639475 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639520 kubelet[2259]: W1029 01:28:28.639488 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639520 kubelet[2259]: E1029 01:28:28.639496 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.639595 kubelet[2259]: E1029 01:28:28.639586 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639595 kubelet[2259]: W1029 01:28:28.639593 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639664 kubelet[2259]: E1029 01:28:28.639602 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.639706 kubelet[2259]: E1029 01:28:28.639696 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639736 kubelet[2259]: W1029 01:28:28.639711 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639736 kubelet[2259]: E1029 01:28:28.639721 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.639838 kubelet[2259]: E1029 01:28:28.639828 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.639838 kubelet[2259]: W1029 01:28:28.639835 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.639894 kubelet[2259]: E1029 01:28:28.639843 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.640094 kubelet[2259]: E1029 01:28:28.640078 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.640094 kubelet[2259]: W1029 01:28:28.640090 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.640148 kubelet[2259]: E1029 01:28:28.640099 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.640222 kubelet[2259]: E1029 01:28:28.640213 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.640222 kubelet[2259]: W1029 01:28:28.640219 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.640295 kubelet[2259]: E1029 01:28:28.640225 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 01:28:28.640381 kubelet[2259]: E1029 01:28:28.640372 2259 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 01:28:28.640381 kubelet[2259]: W1029 01:28:28.640379 2259 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 01:28:28.640445 kubelet[2259]: E1029 01:28:28.640385 2259 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 01:28:28.693384 env[1344]: time="2025-10-29T01:28:28.693356530Z" level=info msg="StartContainer for \"cec571b1ec94443d424e8307a91bbb58c73ed09c8d14824fc9b4de5bfcad83ad\" returns successfully" Oct 29 01:28:28.705242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cec571b1ec94443d424e8307a91bbb58c73ed09c8d14824fc9b4de5bfcad83ad-rootfs.mount: Deactivated successfully. 
Oct 29 01:28:29.005701 env[1344]: time="2025-10-29T01:28:29.005673577Z" level=info msg="shim disconnected" id=cec571b1ec94443d424e8307a91bbb58c73ed09c8d14824fc9b4de5bfcad83ad Oct 29 01:28:29.005874 env[1344]: time="2025-10-29T01:28:29.005862998Z" level=warning msg="cleaning up after shim disconnected" id=cec571b1ec94443d424e8307a91bbb58c73ed09c8d14824fc9b4de5bfcad83ad namespace=k8s.io Oct 29 01:28:29.005930 env[1344]: time="2025-10-29T01:28:29.005913407Z" level=info msg="cleaning up dead shim" Oct 29 01:28:29.011269 env[1344]: time="2025-10-29T01:28:29.011243655Z" level=warning msg="cleanup warnings time=\"2025-10-29T01:28:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2960 runtime=io.containerd.runc.v2\n" Oct 29 01:28:29.490506 kubelet[2259]: E1029 01:28:29.490461 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:29.577365 env[1344]: time="2025-10-29T01:28:29.577336662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 01:28:31.489867 kubelet[2259]: E1029 01:28:31.489837 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:32.766376 env[1344]: time="2025-10-29T01:28:32.766346999Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:32.768976 env[1344]: time="2025-10-29T01:28:32.768959760Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:32.769990 env[1344]: time="2025-10-29T01:28:32.769974210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:32.770929 env[1344]: time="2025-10-29T01:28:32.770912778Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:32.771238 env[1344]: time="2025-10-29T01:28:32.771219919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 01:28:32.773796 env[1344]: time="2025-10-29T01:28:32.773776165Z" level=info msg="CreateContainer within sandbox \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 01:28:32.780090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267257564.mount: Deactivated successfully. 
Oct 29 01:28:32.781450 env[1344]: time="2025-10-29T01:28:32.781433134Z" level=info msg="CreateContainer within sandbox \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d\"" Oct 29 01:28:32.782633 env[1344]: time="2025-10-29T01:28:32.782615172Z" level=info msg="StartContainer for \"64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d\"" Oct 29 01:28:32.823605 env[1344]: time="2025-10-29T01:28:32.823274820Z" level=info msg="StartContainer for \"64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d\" returns successfully" Oct 29 01:28:33.509143 kubelet[2259]: E1029 01:28:33.509119 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:33.778258 systemd[1]: run-containerd-runc-k8s.io-64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d-runc.QLHxgP.mount: Deactivated successfully. Oct 29 01:28:34.062213 env[1344]: time="2025-10-29T01:28:34.061942271Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 01:28:34.075824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d-rootfs.mount: Deactivated successfully. 
Oct 29 01:28:34.079644 env[1344]: time="2025-10-29T01:28:34.079610546Z" level=info msg="shim disconnected" id=64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d Oct 29 01:28:34.079644 env[1344]: time="2025-10-29T01:28:34.079639110Z" level=warning msg="cleaning up after shim disconnected" id=64349a385ba106d4dd7c39b8d62c9cc2fcd407cdd23722d27ab226ff49f5166d namespace=k8s.io Oct 29 01:28:34.079644 env[1344]: time="2025-10-29T01:28:34.079645870Z" level=info msg="cleaning up dead shim" Oct 29 01:28:34.085671 env[1344]: time="2025-10-29T01:28:34.085649040Z" level=warning msg="cleanup warnings time=\"2025-10-29T01:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3031 runtime=io.containerd.runc.v2\n" Oct 29 01:28:34.159724 kubelet[2259]: I1029 01:28:34.158880 2259 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 01:28:34.309620 kubelet[2259]: I1029 01:28:34.309597 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-backend-key-pair\") pod \"whisker-657794849-7mwqt\" (UID: \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\") " pod="calico-system/whisker-657794849-7mwqt" Oct 29 01:28:34.309620 kubelet[2259]: I1029 01:28:34.309620 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrm8\" (UniqueName: \"kubernetes.io/projected/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-kube-api-access-fkrm8\") pod \"whisker-657794849-7mwqt\" (UID: \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\") " pod="calico-system/whisker-657794849-7mwqt" Oct 29 01:28:34.309749 kubelet[2259]: I1029 01:28:34.309642 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af81d8c-f0dd-4f37-b2dd-db8e64891fd3-config\") pod 
\"goldmane-666569f655-h4k8p\" (UID: \"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3\") " pod="calico-system/goldmane-666569f655-h4k8p" Oct 29 01:28:34.309749 kubelet[2259]: I1029 01:28:34.309656 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzpv\" (UniqueName: \"kubernetes.io/projected/7dd39e5b-dcdd-482f-ab49-9053a64b98c9-kube-api-access-7mzpv\") pod \"calico-apiserver-79569d88b4-p6h9g\" (UID: \"7dd39e5b-dcdd-482f-ab49-9053a64b98c9\") " pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" Oct 29 01:28:34.309749 kubelet[2259]: I1029 01:28:34.309669 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f7d88c-b007-49f1-8b19-74afbf972b6c-config-volume\") pod \"coredns-668d6bf9bc-tlq7c\" (UID: \"a0f7d88c-b007-49f1-8b19-74afbf972b6c\") " pod="kube-system/coredns-668d6bf9bc-tlq7c" Oct 29 01:28:34.309749 kubelet[2259]: I1029 01:28:34.309677 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-ca-bundle\") pod \"whisker-657794849-7mwqt\" (UID: \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\") " pod="calico-system/whisker-657794849-7mwqt" Oct 29 01:28:34.309749 kubelet[2259]: I1029 01:28:34.309691 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c80c9899-41ee-40c7-92c7-ab20d72dcefe-calico-apiserver-certs\") pod \"calico-apiserver-79569d88b4-q8r9t\" (UID: \"c80c9899-41ee-40c7-92c7-ab20d72dcefe\") " pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" Oct 29 01:28:34.309852 kubelet[2259]: I1029 01:28:34.309702 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwtb5\" (UniqueName: 
\"kubernetes.io/projected/c80c9899-41ee-40c7-92c7-ab20d72dcefe-kube-api-access-zwtb5\") pod \"calico-apiserver-79569d88b4-q8r9t\" (UID: \"c80c9899-41ee-40c7-92c7-ab20d72dcefe\") " pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" Oct 29 01:28:34.309852 kubelet[2259]: I1029 01:28:34.309721 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c8f51f5-a169-488a-a224-1fc1684a62fb-tigera-ca-bundle\") pod \"calico-kube-controllers-7b46bb89cf-x7bmp\" (UID: \"0c8f51f5-a169-488a-a224-1fc1684a62fb\") " pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" Oct 29 01:28:34.309852 kubelet[2259]: I1029 01:28:34.309733 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhxx\" (UniqueName: \"kubernetes.io/projected/2fd41a65-6531-4e39-b3e6-b8b8fe6bf795-kube-api-access-nhhxx\") pod \"coredns-668d6bf9bc-h6hwn\" (UID: \"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795\") " pod="kube-system/coredns-668d6bf9bc-h6hwn" Oct 29 01:28:34.309852 kubelet[2259]: I1029 01:28:34.309743 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5af81d8c-f0dd-4f37-b2dd-db8e64891fd3-goldmane-ca-bundle\") pod \"goldmane-666569f655-h4k8p\" (UID: \"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3\") " pod="calico-system/goldmane-666569f655-h4k8p" Oct 29 01:28:34.309852 kubelet[2259]: I1029 01:28:34.309754 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9wcz\" (UniqueName: \"kubernetes.io/projected/5af81d8c-f0dd-4f37-b2dd-db8e64891fd3-kube-api-access-p9wcz\") pod \"goldmane-666569f655-h4k8p\" (UID: \"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3\") " pod="calico-system/goldmane-666569f655-h4k8p" Oct 29 01:28:34.309948 kubelet[2259]: I1029 01:28:34.309765 2259 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxx4\" (UniqueName: \"kubernetes.io/projected/a0f7d88c-b007-49f1-8b19-74afbf972b6c-kube-api-access-pfxx4\") pod \"coredns-668d6bf9bc-tlq7c\" (UID: \"a0f7d88c-b007-49f1-8b19-74afbf972b6c\") " pod="kube-system/coredns-668d6bf9bc-tlq7c" Oct 29 01:28:34.309948 kubelet[2259]: I1029 01:28:34.309774 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fd41a65-6531-4e39-b3e6-b8b8fe6bf795-config-volume\") pod \"coredns-668d6bf9bc-h6hwn\" (UID: \"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795\") " pod="kube-system/coredns-668d6bf9bc-h6hwn" Oct 29 01:28:34.309948 kubelet[2259]: I1029 01:28:34.309783 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wfq\" (UniqueName: \"kubernetes.io/projected/0c8f51f5-a169-488a-a224-1fc1684a62fb-kube-api-access-p7wfq\") pod \"calico-kube-controllers-7b46bb89cf-x7bmp\" (UID: \"0c8f51f5-a169-488a-a224-1fc1684a62fb\") " pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" Oct 29 01:28:34.309948 kubelet[2259]: I1029 01:28:34.309802 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5af81d8c-f0dd-4f37-b2dd-db8e64891fd3-goldmane-key-pair\") pod \"goldmane-666569f655-h4k8p\" (UID: \"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3\") " pod="calico-system/goldmane-666569f655-h4k8p" Oct 29 01:28:34.309948 kubelet[2259]: I1029 01:28:34.309812 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7dd39e5b-dcdd-482f-ab49-9053a64b98c9-calico-apiserver-certs\") pod \"calico-apiserver-79569d88b4-p6h9g\" (UID: \"7dd39e5b-dcdd-482f-ab49-9053a64b98c9\") " 
pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" Oct 29 01:28:34.504584 env[1344]: time="2025-10-29T01:28:34.504555751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlq7c,Uid:a0f7d88c-b007-49f1-8b19-74afbf972b6c,Namespace:kube-system,Attempt:0,}" Oct 29 01:28:34.504946 env[1344]: time="2025-10-29T01:28:34.504929808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46bb89cf-x7bmp,Uid:0c8f51f5-a169-488a-a224-1fc1684a62fb,Namespace:calico-system,Attempt:0,}" Oct 29 01:28:34.514665 env[1344]: time="2025-10-29T01:28:34.514650543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-p6h9g,Uid:7dd39e5b-dcdd-482f-ab49-9053a64b98c9,Namespace:calico-apiserver,Attempt:0,}" Oct 29 01:28:34.523382 env[1344]: time="2025-10-29T01:28:34.523356758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-q8r9t,Uid:c80c9899-41ee-40c7-92c7-ab20d72dcefe,Namespace:calico-apiserver,Attempt:0,}" Oct 29 01:28:34.523677 env[1344]: time="2025-10-29T01:28:34.523664067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h4k8p,Uid:5af81d8c-f0dd-4f37-b2dd-db8e64891fd3,Namespace:calico-system,Attempt:0,}" Oct 29 01:28:34.528111 env[1344]: time="2025-10-29T01:28:34.528093423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h6hwn,Uid:2fd41a65-6531-4e39-b3e6-b8b8fe6bf795,Namespace:kube-system,Attempt:0,}" Oct 29 01:28:34.533780 env[1344]: time="2025-10-29T01:28:34.533766475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-657794849-7mwqt,Uid:d5ad8127-18bd-4b5b-a585-d5881c52dc3e,Namespace:calico-system,Attempt:0,}" Oct 29 01:28:34.609296 env[1344]: time="2025-10-29T01:28:34.609269859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 01:28:34.762325 env[1344]: time="2025-10-29T01:28:34.762241598Z" level=error msg="Failed to destroy network for sandbox 
\"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.762773 env[1344]: time="2025-10-29T01:28:34.762749201Z" level=error msg="encountered an error cleaning up failed sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.762816 env[1344]: time="2025-10-29T01:28:34.762787477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h6hwn,Uid:2fd41a65-6531-4e39-b3e6-b8b8fe6bf795,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.765736 kubelet[2259]: E1029 01:28:34.765707 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.767334 kubelet[2259]: E1029 01:28:34.767310 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h6hwn" Oct 29 01:28:34.767382 kubelet[2259]: E1029 01:28:34.767335 2259 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h6hwn" Oct 29 01:28:34.767431 kubelet[2259]: E1029 01:28:34.767373 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h6hwn_kube-system(2fd41a65-6531-4e39-b3e6-b8b8fe6bf795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h6hwn_kube-system(2fd41a65-6531-4e39-b3e6-b8b8fe6bf795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h6hwn" podUID="2fd41a65-6531-4e39-b3e6-b8b8fe6bf795" Oct 29 01:28:34.793844 env[1344]: time="2025-10-29T01:28:34.793804177Z" level=error msg="Failed to destroy network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.795297 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325-shm.mount: Deactivated successfully. Oct 29 01:28:34.796271 env[1344]: time="2025-10-29T01:28:34.796248183Z" level=error msg="encountered an error cleaning up failed sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.796312 env[1344]: time="2025-10-29T01:28:34.796283700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-657794849-7mwqt,Uid:d5ad8127-18bd-4b5b-a585-d5881c52dc3e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.796456 kubelet[2259]: E1029 01:28:34.796433 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.796508 kubelet[2259]: E1029 01:28:34.796475 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-657794849-7mwqt" Oct 29 01:28:34.796508 kubelet[2259]: E1029 01:28:34.796490 2259 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-657794849-7mwqt" Oct 29 01:28:34.796611 kubelet[2259]: E1029 01:28:34.796521 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-657794849-7mwqt_calico-system(d5ad8127-18bd-4b5b-a585-d5881c52dc3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-657794849-7mwqt_calico-system(d5ad8127-18bd-4b5b-a585-d5881c52dc3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-657794849-7mwqt" podUID="d5ad8127-18bd-4b5b-a585-d5881c52dc3e" Oct 29 01:28:34.810691 env[1344]: time="2025-10-29T01:28:34.810657824Z" level=error msg="Failed to destroy network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.812176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68-shm.mount: Deactivated successfully. 
Oct 29 01:28:34.816989 env[1344]: time="2025-10-29T01:28:34.816957505Z" level=error msg="encountered an error cleaning up failed sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.817037 env[1344]: time="2025-10-29T01:28:34.816998234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-p6h9g,Uid:7dd39e5b-dcdd-482f-ab49-9053a64b98c9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.817391 kubelet[2259]: E1029 01:28:34.817165 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.817391 kubelet[2259]: E1029 01:28:34.817210 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" Oct 29 01:28:34.817391 kubelet[2259]: E1029 01:28:34.817224 2259 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" Oct 29 01:28:34.818809 kubelet[2259]: E1029 01:28:34.817247 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79569d88b4-p6h9g_calico-apiserver(7dd39e5b-dcdd-482f-ab49-9053a64b98c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79569d88b4-p6h9g_calico-apiserver(7dd39e5b-dcdd-482f-ab49-9053a64b98c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9" Oct 29 01:28:34.821387 env[1344]: time="2025-10-29T01:28:34.821363687Z" level=error msg="Failed to destroy network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.822895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91-shm.mount: Deactivated successfully. 
Oct 29 01:28:34.823907 env[1344]: time="2025-10-29T01:28:34.823878186Z" level=error msg="encountered an error cleaning up failed sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.823951 env[1344]: time="2025-10-29T01:28:34.823908744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46bb89cf-x7bmp,Uid:0c8f51f5-a169-488a-a224-1fc1684a62fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.824085 kubelet[2259]: E1029 01:28:34.824063 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.824141 kubelet[2259]: E1029 01:28:34.824095 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" Oct 29 01:28:34.824141 kubelet[2259]: E1029 
01:28:34.824115 2259 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" Oct 29 01:28:34.824931 kubelet[2259]: E1029 01:28:34.824145 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b46bb89cf-x7bmp_calico-system(0c8f51f5-a169-488a-a224-1fc1684a62fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b46bb89cf-x7bmp_calico-system(0c8f51f5-a169-488a-a224-1fc1684a62fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb" Oct 29 01:28:34.830747 env[1344]: time="2025-10-29T01:28:34.830722060Z" level=error msg="Failed to destroy network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.832253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb-shm.mount: Deactivated successfully. 
Oct 29 01:28:34.833608 env[1344]: time="2025-10-29T01:28:34.833583633Z" level=error msg="encountered an error cleaning up failed sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.834278 env[1344]: time="2025-10-29T01:28:34.833615885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h4k8p,Uid:5af81d8c-f0dd-4f37-b2dd-db8e64891fd3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.834336 kubelet[2259]: E1029 01:28:34.833762 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.834336 kubelet[2259]: E1029 01:28:34.833821 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h4k8p" Oct 29 01:28:34.834336 kubelet[2259]: E1029 01:28:34.833835 2259 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h4k8p" Oct 29 01:28:34.834412 kubelet[2259]: E1029 01:28:34.833862 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-h4k8p_calico-system(5af81d8c-f0dd-4f37-b2dd-db8e64891fd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-h4k8p_calico-system(5af81d8c-f0dd-4f37-b2dd-db8e64891fd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:28:34.842845 env[1344]: time="2025-10-29T01:28:34.842817464Z" level=error msg="Failed to destroy network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.843162 env[1344]: time="2025-10-29T01:28:34.843141983Z" level=error msg="encountered an error cleaning up failed sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.843246 env[1344]: time="2025-10-29T01:28:34.843229702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlq7c,Uid:a0f7d88c-b007-49f1-8b19-74afbf972b6c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.843611 kubelet[2259]: E1029 01:28:34.843385 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.843611 kubelet[2259]: E1029 01:28:34.843425 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tlq7c" Oct 29 01:28:34.843611 kubelet[2259]: E1029 01:28:34.843447 2259 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-tlq7c" Oct 29 01:28:34.843719 kubelet[2259]: E1029 01:28:34.843479 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tlq7c_kube-system(a0f7d88c-b007-49f1-8b19-74afbf972b6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tlq7c_kube-system(a0f7d88c-b007-49f1-8b19-74afbf972b6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tlq7c" podUID="a0f7d88c-b007-49f1-8b19-74afbf972b6c" Oct 29 01:28:34.845258 env[1344]: time="2025-10-29T01:28:34.845236234Z" level=error msg="Failed to destroy network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.845805 env[1344]: time="2025-10-29T01:28:34.845423652Z" level=error msg="encountered an error cleaning up failed sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.845805 env[1344]: time="2025-10-29T01:28:34.845446817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-q8r9t,Uid:c80c9899-41ee-40c7-92c7-ab20d72dcefe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.845902 kubelet[2259]: E1029 01:28:34.845524 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:34.845902 kubelet[2259]: E1029 01:28:34.845543 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" Oct 29 01:28:34.845902 kubelet[2259]: E1029 01:28:34.845553 2259 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" Oct 29 01:28:34.845972 kubelet[2259]: E1029 01:28:34.845607 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79569d88b4-q8r9t_calico-apiserver(c80c9899-41ee-40c7-92c7-ab20d72dcefe)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-79569d88b4-q8r9t_calico-apiserver(c80c9899-41ee-40c7-92c7-ab20d72dcefe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:28:35.494366 env[1344]: time="2025-10-29T01:28:35.494330353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w9mz,Uid:1b41dfbb-cb8d-4095-9219-ece15b48c5c3,Namespace:calico-system,Attempt:0,}" Oct 29 01:28:35.533524 env[1344]: time="2025-10-29T01:28:35.533477275Z" level=error msg="Failed to destroy network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.533739 env[1344]: time="2025-10-29T01:28:35.533718100Z" level=error msg="encountered an error cleaning up failed sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.533784 env[1344]: time="2025-10-29T01:28:35.533747056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w9mz,Uid:1b41dfbb-cb8d-4095-9219-ece15b48c5c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.534274 kubelet[2259]: E1029 01:28:35.533928 2259 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.534274 kubelet[2259]: E1029 01:28:35.533982 2259 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:35.534274 kubelet[2259]: E1029 01:28:35.534018 2259 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6w9mz" Oct 29 01:28:35.534410 kubelet[2259]: E1029 01:28:35.534044 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:35.612225 env[1344]: time="2025-10-29T01:28:35.611237102Z" level=info msg="StopPodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\"" Oct 29 01:28:35.612913 kubelet[2259]: I1029 01:28:35.612895 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:28:35.612965 kubelet[2259]: I1029 01:28:35.612923 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:28:35.613217 env[1344]: time="2025-10-29T01:28:35.613202878Z" level=info msg="StopPodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\"" Oct 29 01:28:35.613490 kubelet[2259]: I1029 01:28:35.613477 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:28:35.613713 env[1344]: time="2025-10-29T01:28:35.613699774Z" level=info msg="StopPodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\"" Oct 29 01:28:35.615309 kubelet[2259]: I1029 01:28:35.615197 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:28:35.615891 env[1344]: time="2025-10-29T01:28:35.615876982Z" level=info msg="StopPodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\"" Oct 29 01:28:35.617927 kubelet[2259]: I1029 01:28:35.617893 2259 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:28:35.618834 env[1344]: time="2025-10-29T01:28:35.618812484Z" level=info msg="StopPodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\"" Oct 29 01:28:35.619529 kubelet[2259]: I1029 01:28:35.619516 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:28:35.620524 env[1344]: time="2025-10-29T01:28:35.620500167Z" level=info msg="StopPodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\"" Oct 29 01:28:35.621462 kubelet[2259]: I1029 01:28:35.621446 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:28:35.622606 env[1344]: time="2025-10-29T01:28:35.622588396Z" level=info msg="StopPodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\"" Oct 29 01:28:35.623839 kubelet[2259]: I1029 01:28:35.623523 2259 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:28:35.624123 env[1344]: time="2025-10-29T01:28:35.624108200Z" level=info msg="StopPodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\"" Oct 29 01:28:35.648765 env[1344]: time="2025-10-29T01:28:35.648728174Z" level=error msg="StopPodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" failed" error="failed to destroy network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 
01:28:35.648970 kubelet[2259]: E1029 01:28:35.648947 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:28:35.649036 kubelet[2259]: E1029 01:28:35.649000 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03"} Oct 29 01:28:35.649071 kubelet[2259]: E1029 01:28:35.649041 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.649071 kubelet[2259]: E1029 01:28:35.649057 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b41dfbb-cb8d-4095-9219-ece15b48c5c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:35.649485 env[1344]: 
time="2025-10-29T01:28:35.649462109Z" level=error msg="StopPodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" failed" error="failed to destroy network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.649674 kubelet[2259]: E1029 01:28:35.649600 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:28:35.649674 kubelet[2259]: E1029 01:28:35.649620 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb"} Oct 29 01:28:35.649674 kubelet[2259]: E1029 01:28:35.649648 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.649674 kubelet[2259]: E1029 01:28:35.649660 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:28:35.682041 env[1344]: time="2025-10-29T01:28:35.681999357Z" level=error msg="StopPodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" failed" error="failed to destroy network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.682374 kubelet[2259]: E1029 01:28:35.682334 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:28:35.682439 kubelet[2259]: E1029 01:28:35.682383 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68"} Oct 29 01:28:35.682439 kubelet[2259]: E1029 01:28:35.682408 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dd39e5b-dcdd-482f-ab49-9053a64b98c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.682439 kubelet[2259]: E1029 01:28:35.682431 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dd39e5b-dcdd-482f-ab49-9053a64b98c9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9" Oct 29 01:28:35.689603 env[1344]: time="2025-10-29T01:28:35.689566758Z" level=error msg="StopPodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" failed" error="failed to destroy network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.689932 kubelet[2259]: E1029 01:28:35.689823 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:28:35.689932 kubelet[2259]: E1029 01:28:35.689863 2259 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325"} Oct 29 01:28:35.689932 kubelet[2259]: E1029 01:28:35.689884 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.689932 kubelet[2259]: E1029 01:28:35.689897 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-657794849-7mwqt" podUID="d5ad8127-18bd-4b5b-a585-d5881c52dc3e" Oct 29 01:28:35.691890 env[1344]: time="2025-10-29T01:28:35.691838385Z" level=error msg="StopPodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" failed" error="failed to destroy network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.692039 kubelet[2259]: E1029 01:28:35.692020 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to destroy network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:28:35.692085 kubelet[2259]: E1029 01:28:35.692043 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd"} Oct 29 01:28:35.692085 kubelet[2259]: E1029 01:28:35.692059 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0f7d88c-b007-49f1-8b19-74afbf972b6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.692085 kubelet[2259]: E1029 01:28:35.692071 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0f7d88c-b007-49f1-8b19-74afbf972b6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tlq7c" podUID="a0f7d88c-b007-49f1-8b19-74afbf972b6c" Oct 29 01:28:35.700467 env[1344]: time="2025-10-29T01:28:35.700436147Z" level=error msg="StopPodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" failed" 
error="failed to destroy network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.700787 kubelet[2259]: E1029 01:28:35.700685 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:28:35.700787 kubelet[2259]: E1029 01:28:35.700723 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9"} Oct 29 01:28:35.700787 kubelet[2259]: E1029 01:28:35.700746 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.700787 kubelet[2259]: E1029 01:28:35.700759 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h6hwn" podUID="2fd41a65-6531-4e39-b3e6-b8b8fe6bf795" Oct 29 01:28:35.704173 env[1344]: time="2025-10-29T01:28:35.704148158Z" level=error msg="StopPodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" failed" error="failed to destroy network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.704426 kubelet[2259]: E1029 01:28:35.704334 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:28:35.704426 kubelet[2259]: E1029 01:28:35.704360 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91"} Oct 29 01:28:35.704426 kubelet[2259]: E1029 01:28:35.704388 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c8f51f5-a169-488a-a224-1fc1684a62fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.704426 kubelet[2259]: E1029 01:28:35.704408 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c8f51f5-a169-488a-a224-1fc1684a62fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb" Oct 29 01:28:35.704832 env[1344]: time="2025-10-29T01:28:35.704802197Z" level=error msg="StopPodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" failed" error="failed to destroy network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 01:28:35.704904 kubelet[2259]: E1029 01:28:35.704885 2259 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:28:35.704940 kubelet[2259]: E1029 01:28:35.704905 2259 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e"} Oct 29 01:28:35.704940 
kubelet[2259]: E1029 01:28:35.704921 2259 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c80c9899-41ee-40c7-92c7-ab20d72dcefe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 01:28:35.705007 kubelet[2259]: E1029 01:28:35.704935 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c80c9899-41ee-40c7-92c7-ab20d72dcefe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:28:35.778999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e-shm.mount: Deactivated successfully. Oct 29 01:28:35.779106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd-shm.mount: Deactivated successfully. Oct 29 01:28:39.545269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286046756.mount: Deactivated successfully. 
Oct 29 01:28:39.623705 env[1344]: time="2025-10-29T01:28:39.623664948Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:39.624913 env[1344]: time="2025-10-29T01:28:39.624890500Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:39.625832 env[1344]: time="2025-10-29T01:28:39.625813162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:39.626654 env[1344]: time="2025-10-29T01:28:39.626633092Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 01:28:39.627137 env[1344]: time="2025-10-29T01:28:39.627112359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 01:28:39.666428 env[1344]: time="2025-10-29T01:28:39.666406619Z" level=info msg="CreateContainer within sandbox \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 01:28:39.718284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062485618.mount: Deactivated successfully. 
Oct 29 01:28:39.720732 env[1344]: time="2025-10-29T01:28:39.720706206Z" level=info msg="CreateContainer within sandbox \"ba5ce9995e62117ad9cbd7cd15ef5f3966a1cacdf51f66ce307f32a2afd262b6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66720a4fa800a80f57832ea2f4eca7a3b4fbacada8b8b5341a62852577409ba7\"" Oct 29 01:28:39.723203 env[1344]: time="2025-10-29T01:28:39.723157307Z" level=info msg="StartContainer for \"66720a4fa800a80f57832ea2f4eca7a3b4fbacada8b8b5341a62852577409ba7\"" Oct 29 01:28:39.759208 env[1344]: time="2025-10-29T01:28:39.759022168Z" level=info msg="StartContainer for \"66720a4fa800a80f57832ea2f4eca7a3b4fbacada8b8b5341a62852577409ba7\" returns successfully" Oct 29 01:28:40.252255 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 01:28:40.252714 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 29 01:28:40.477628 env[1344]: time="2025-10-29T01:28:40.477272367Z" level=info msg="StopPodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\"" Oct 29 01:28:40.676233 systemd[1]: run-containerd-runc-k8s.io-66720a4fa800a80f57832ea2f4eca7a3b4fbacada8b8b5341a62852577409ba7-runc.sR98Vi.mount: Deactivated successfully. Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.569 [INFO][3457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.569 [INFO][3457] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" iface="eth0" netns="/var/run/netns/cni-2a0bcc20-bc32-c3a5-ba2e-5ea4b458bc94" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.569 [INFO][3457] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" iface="eth0" netns="/var/run/netns/cni-2a0bcc20-bc32-c3a5-ba2e-5ea4b458bc94" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.570 [INFO][3457] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" iface="eth0" netns="/var/run/netns/cni-2a0bcc20-bc32-c3a5-ba2e-5ea4b458bc94" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.570 [INFO][3457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.570 [INFO][3457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.821 [INFO][3464] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.825 [INFO][3464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.826 [INFO][3464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.840 [WARNING][3464] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.840 [INFO][3464] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.841 [INFO][3464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:40.844211 env[1344]: 2025-10-29 01:28:40.842 [INFO][3457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:28:40.847465 env[1344]: time="2025-10-29T01:28:40.846776252Z" level=info msg="TearDown network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" successfully" Oct 29 01:28:40.847465 env[1344]: time="2025-10-29T01:28:40.846798435Z" level=info msg="StopPodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" returns successfully" Oct 29 01:28:40.845819 systemd[1]: run-netns-cni\x2d2a0bcc20\x2dbc32\x2dc3a5\x2dba2e\x2d5ea4b458bc94.mount: Deactivated successfully. 
Oct 29 01:28:40.977211 kubelet[2259]: I1029 01:28:40.977119 2259 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-backend-key-pair\") pod \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\" (UID: \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\") " Oct 29 01:28:40.978034 kubelet[2259]: I1029 01:28:40.977528 2259 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkrm8\" (UniqueName: \"kubernetes.io/projected/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-kube-api-access-fkrm8\") pod \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\" (UID: \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\") " Oct 29 01:28:40.978034 kubelet[2259]: I1029 01:28:40.977557 2259 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-ca-bundle\") pod \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\" (UID: \"d5ad8127-18bd-4b5b-a585-d5881c52dc3e\") " Oct 29 01:28:40.980667 kubelet[2259]: I1029 01:28:40.978525 2259 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d5ad8127-18bd-4b5b-a585-d5881c52dc3e" (UID: "d5ad8127-18bd-4b5b-a585-d5881c52dc3e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 01:28:40.981559 kubelet[2259]: I1029 01:28:40.981545 2259 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-kube-api-access-fkrm8" (OuterVolumeSpecName: "kube-api-access-fkrm8") pod "d5ad8127-18bd-4b5b-a585-d5881c52dc3e" (UID: "d5ad8127-18bd-4b5b-a585-d5881c52dc3e"). InnerVolumeSpecName "kube-api-access-fkrm8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 01:28:40.983945 kubelet[2259]: I1029 01:28:40.983923 2259 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d5ad8127-18bd-4b5b-a585-d5881c52dc3e" (UID: "d5ad8127-18bd-4b5b-a585-d5881c52dc3e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 01:28:41.078658 kubelet[2259]: I1029 01:28:41.078629 2259 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fkrm8\" (UniqueName: \"kubernetes.io/projected/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-kube-api-access-fkrm8\") on node \"localhost\" DevicePath \"\"" Oct 29 01:28:41.078658 kubelet[2259]: I1029 01:28:41.078655 2259 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 01:28:41.078658 kubelet[2259]: I1029 01:28:41.078664 2259 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5ad8127-18bd-4b5b-a585-d5881c52dc3e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 01:28:41.545510 systemd[1]: var-lib-kubelet-pods-d5ad8127\x2d18bd\x2d4b5b\x2da585\x2dd5881c52dc3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfkrm8.mount: Deactivated successfully. Oct 29 01:28:41.545593 systemd[1]: var-lib-kubelet-pods-d5ad8127\x2d18bd\x2d4b5b\x2da585\x2dd5881c52dc3e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 29 01:28:41.665058 kernel: kauditd_printk_skb: 31 callbacks suppressed Oct 29 01:28:41.672515 kernel: audit: type=1400 audit(1761701321.661:285): avc: denied { write } for pid=3571 comm="tee" name="fd" dev="proc" ino=36408 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.674315 kernel: audit: type=1300 audit(1761701321.661:285): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce822b7e5 a2=241 a3=1b6 items=1 ppid=3518 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.661000 audit[3571]: AVC avc: denied { write } for pid=3571 comm="tee" name="fd" dev="proc" ino=36408 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.683543 kernel: audit: type=1307 audit(1761701321.661:285): cwd="/etc/service/enabled/felix/log" Oct 29 01:28:41.683585 kernel: audit: type=1302 audit(1761701321.661:285): item=0 name="/dev/fd/63" inode=36399 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.661000 audit[3571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce822b7e5 a2=241 a3=1b6 items=1 ppid=3518 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.661000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 29 01:28:41.661000 audit: PATH item=0 name="/dev/fd/63" inode=36399 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.696575 kernel: audit: type=1327 audit(1761701321.661:285): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.696622 kernel: audit: type=1400 audit(1761701321.693:286): avc: denied { write } for pid=3567 comm="tee" name="fd" dev="proc" ino=36445 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.693000 audit[3567]: AVC avc: denied { write } for pid=3567 comm="tee" name="fd" dev="proc" ino=36445 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.703722 kubelet[2259]: I1029 01:28:41.698352 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tb6kq" podStartSLOduration=2.571038319 podStartE2EDuration="17.692977922s" podCreationTimestamp="2025-10-29 01:28:24 +0000 UTC" firstStartedPulling="2025-10-29 01:28:24.512929544 +0000 UTC m=+17.169044568" lastFinishedPulling="2025-10-29 01:28:39.634869142 +0000 UTC m=+32.290984171" observedRunningTime="2025-10-29 01:28:40.6622933 +0000 UTC m=+33.318408332" watchObservedRunningTime="2025-10-29 01:28:41.692977922 +0000 UTC m=+34.349092949" Oct 29 01:28:41.718000 audit[3574]: AVC avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=36450 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.693000 audit[3567]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd503257e5 a2=241 a3=1b6 items=1 ppid=3513 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 
01:28:41.725221 kernel: audit: type=1400 audit(1761701321.718:287): avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=36450 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.725267 kernel: audit: type=1300 audit(1761701321.693:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd503257e5 a2=241 a3=1b6 items=1 ppid=3513 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.693000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 29 01:28:41.726308 kernel: audit: type=1307 audit(1761701321.693:286): cwd="/etc/service/enabled/bird6/log" Oct 29 01:28:41.726338 kernel: audit: type=1302 audit(1761701321.693:286): item=0 name="/dev/fd/63" inode=36382 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.693000 audit: PATH item=0 name="/dev/fd/63" inode=36382 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.718000 audit[3576]: AVC avc: denied { write } for pid=3576 comm="tee" name="fd" dev="proc" ino=35830 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.718000 audit[3576]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb4fba7e6 a2=241 a3=1b6 items=1 ppid=3514 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.718000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 29 01:28:41.718000 audit: PATH item=0 name="/dev/fd/63" inode=36405 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.741000 audit[3583]: AVC avc: denied { write } for pid=3583 comm="tee" name="fd" dev="proc" ino=36455 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.718000 audit[3574]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd894377e5 a2=241 a3=1b6 items=1 ppid=3517 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.718000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 29 01:28:41.718000 audit: PATH item=0 name="/dev/fd/63" inode=36404 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.742000 audit[3569]: AVC avc: denied { write } for pid=3569 comm="tee" name="fd" dev="proc" ino=36460 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.741000 audit[3583]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe27b027d6 a2=241 a3=1b6 items=1 ppid=3525 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.741000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 29 01:28:41.741000 audit: PATH item=0 name="/dev/fd/63" inode=36416 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.742000 audit[3569]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4d21a7d5 a2=241 a3=1b6 items=1 ppid=3520 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.742000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 29 01:28:41.742000 audit: PATH item=0 name="/dev/fd/63" inode=36396 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.750000 audit[3608]: AVC avc: denied { write } for pid=3608 comm="tee" name="fd" dev="proc" ino=36465 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 01:28:41.750000 audit[3608]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff1f367e7 a2=241 a3=1b6 items=1 ppid=3539 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:41.750000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 29 01:28:41.750000 audit: PATH item=0 name="/dev/fd/63" inode=36457 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 01:28:41.750000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 01:28:41.889007 kubelet[2259]: I1029 01:28:41.888945 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d15c73c2-757b-4aa9-bf20-2183e910769f-whisker-backend-key-pair\") pod \"whisker-55bf5bd85-l59k9\" (UID: \"d15c73c2-757b-4aa9-bf20-2183e910769f\") " pod="calico-system/whisker-55bf5bd85-l59k9" Oct 29 01:28:41.890984 kubelet[2259]: I1029 01:28:41.890969 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dmjb\" (UniqueName: \"kubernetes.io/projected/d15c73c2-757b-4aa9-bf20-2183e910769f-kube-api-access-5dmjb\") pod \"whisker-55bf5bd85-l59k9\" (UID: \"d15c73c2-757b-4aa9-bf20-2183e910769f\") " pod="calico-system/whisker-55bf5bd85-l59k9" Oct 29 01:28:41.891100 kubelet[2259]: I1029 01:28:41.891090 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d15c73c2-757b-4aa9-bf20-2183e910769f-whisker-ca-bundle\") pod \"whisker-55bf5bd85-l59k9\" (UID: \"d15c73c2-757b-4aa9-bf20-2183e910769f\") " pod="calico-system/whisker-55bf5bd85-l59k9" Oct 29 01:28:42.061995 env[1344]: time="2025-10-29T01:28:42.061801365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bf5bd85-l59k9,Uid:d15c73c2-757b-4aa9-bf20-2183e910769f,Namespace:calico-system,Attempt:0,}" Oct 
29 01:28:42.128377 kubelet[2259]: I1029 01:28:42.128357 2259 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 01:28:42.162209 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 01:28:42.164061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7feb3b10753: link becomes ready Oct 29 01:28:42.167660 systemd-networkd[1110]: cali7feb3b10753: Link UP Oct 29 01:28:42.167761 systemd-networkd[1110]: cali7feb3b10753: Gained carrier Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.089 [INFO][3616] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.096 [INFO][3616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55bf5bd85--l59k9-eth0 whisker-55bf5bd85- calico-system d15c73c2-757b-4aa9-bf20-2183e910769f 865 0 2025-10-29 01:28:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55bf5bd85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55bf5bd85-l59k9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7feb3b10753 [] [] }} ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.096 [INFO][3616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.111 [INFO][3629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" HandleID="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Workload="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.111 [INFO][3629] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" HandleID="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Workload="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55bf5bd85-l59k9", "timestamp":"2025-10-29 01:28:42.111175072 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.111 [INFO][3629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.111 [INFO][3629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.111 [INFO][3629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.126 [INFO][3629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.134 [INFO][3629] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.137 [INFO][3629] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.138 [INFO][3629] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.139 [INFO][3629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.139 [INFO][3629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.140 [INFO][3629] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.142 [INFO][3629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.146 [INFO][3629] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" host="localhost" Oct 29 
01:28:42.180606 env[1344]: 2025-10-29 01:28:42.146 [INFO][3629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" host="localhost" Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.146 [INFO][3629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:42.180606 env[1344]: 2025-10-29 01:28:42.146 [INFO][3629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" HandleID="k8s-pod-network.f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Workload="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.181388 env[1344]: 2025-10-29 01:28:42.148 [INFO][3616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55bf5bd85--l59k9-eth0", GenerateName:"whisker-55bf5bd85-", Namespace:"calico-system", SelfLink:"", UID:"d15c73c2-757b-4aa9-bf20-2183e910769f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bf5bd85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55bf5bd85-l59k9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7feb3b10753", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:42.181388 env[1344]: 2025-10-29 01:28:42.148 [INFO][3616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.181388 env[1344]: 2025-10-29 01:28:42.148 [INFO][3616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7feb3b10753 ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.181388 env[1344]: 2025-10-29 01:28:42.162 [INFO][3616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.181388 env[1344]: 2025-10-29 01:28:42.162 [INFO][3616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55bf5bd85--l59k9-eth0", GenerateName:"whisker-55bf5bd85-", Namespace:"calico-system", SelfLink:"", UID:"d15c73c2-757b-4aa9-bf20-2183e910769f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bf5bd85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d", Pod:"whisker-55bf5bd85-l59k9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7feb3b10753", MAC:"5a:2c:85:4f:e4:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:42.181388 env[1344]: 2025-10-29 01:28:42.178 [INFO][3616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d" Namespace="calico-system" Pod="whisker-55bf5bd85-l59k9" WorkloadEndpoint="localhost-k8s-whisker--55bf5bd85--l59k9-eth0" Oct 29 01:28:42.194626 env[1344]: time="2025-10-29T01:28:42.194547176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:42.194626 env[1344]: time="2025-10-29T01:28:42.194605464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:42.195234 env[1344]: time="2025-10-29T01:28:42.194613175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:42.195234 env[1344]: time="2025-10-29T01:28:42.194704257Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d pid=3650 runtime=io.containerd.runc.v2 Oct 29 01:28:42.221938 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:42.252306 env[1344]: time="2025-10-29T01:28:42.252281130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bf5bd85-l59k9,Uid:d15c73c2-757b-4aa9-bf20-2183e910769f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f52986fd96bce0b93b6800273b1e1f9fdf0fffe236b2a8ea1dcc9bbec2c9104d\"" Oct 29 01:28:42.256154 env[1344]: time="2025-10-29T01:28:42.256137051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 01:28:42.275000 audit[3684]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:42.275000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb0e2bbb0 a2=0 a3=7ffcb0e2bb9c items=0 ppid=2360 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:42.275000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:42.280000 audit[3684]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:42.280000 audit[3684]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcb0e2bbb0 a2=0 a3=7ffcb0e2bb9c items=0 ppid=2360 pid=3684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:42.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:42.609992 env[1344]: time="2025-10-29T01:28:42.609887840Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:42.610455 env[1344]: time="2025-10-29T01:28:42.610399717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 01:28:42.610808 kubelet[2259]: E1029 01:28:42.610638 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 01:28:42.610808 kubelet[2259]: E1029 01:28:42.610684 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 01:28:42.614679 kubelet[2259]: E1029 01:28:42.614649 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c343149867d44f28e7fda414822142b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dmjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bf5bd85-l59k9_calico-system(d15c73c2-757b-4aa9-bf20-2183e910769f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:42.616732 env[1344]: time="2025-10-29T01:28:42.616716533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 01:28:42.665448 systemd[1]: run-containerd-runc-k8s.io-66720a4fa800a80f57832ea2f4eca7a3b4fbacada8b8b5341a62852577409ba7-runc.vLjIU1.mount: Deactivated successfully. Oct 29 01:28:42.958557 env[1344]: time="2025-10-29T01:28:42.958507528Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:42.964723 env[1344]: time="2025-10-29T01:28:42.964691149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 01:28:42.965040 kubelet[2259]: E1029 01:28:42.964903 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 01:28:42.965040 kubelet[2259]: E1029 01:28:42.964933 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 01:28:42.965121 kubelet[2259]: E1029 01:28:42.965011 2259 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dmjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bf5bd85-l59k9_calico-system(d15c73c2-757b-4aa9-bf20-2183e910769f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:42.968908 kubelet[2259]: E1029 01:28:42.968879 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f" Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { perfmon } 
for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.056000 audit: BPF prog-id=10 op=LOAD Oct 29 01:28:43.056000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd7f4f8b0 a2=98 a3=1fffffffffffffff items=0 ppid=3719 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.056000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 01:28:43.057000 audit: BPF prog-id=10 op=UNLOAD Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 
audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit: BPF prog-id=11 op=LOAD Oct 29 01:28:43.058000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd7f4f790 a2=94 a3=3 items=0 ppid=3719 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 01:28:43.058000 audit: BPF prog-id=11 op=UNLOAD Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { 
bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { bpf } for pid=3760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit: BPF prog-id=12 op=LOAD Oct 29 01:28:43.058000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd7f4f7d0 a2=94 a3=7fffd7f4f9b0 items=0 ppid=3719 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 01:28:43.058000 audit: BPF prog-id=12 op=UNLOAD Oct 29 01:28:43.058000 audit[3760]: AVC avc: denied { perfmon } for pid=3760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.058000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffd7f4f8a0 a2=50 a3=a000000085 items=0 ppid=3719 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 
01:28:43.059000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.059000 audit: BPF prog-id=13 op=LOAD Oct 29 01:28:43.059000 
audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd6dbf8430 a2=98 a3=3 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.059000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.059000 audit: BPF prog-id=13 op=UNLOAD Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: 
AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit: BPF prog-id=14 op=LOAD Oct 29 01:28:43.060000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6dbf8220 a2=94 a3=54428f items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.060000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.060000 audit: BPF prog-id=14 op=UNLOAD Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.060000 audit: BPF prog-id=15 op=LOAD Oct 29 01:28:43.060000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6dbf8250 a2=94 a3=2 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.060000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.060000 audit: BPF prog-id=15 op=UNLOAD Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit: BPF prog-id=16 op=LOAD Oct 29 01:28:43.129000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd6dbf8110 a2=94 a3=1 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.129000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.129000 audit: BPF prog-id=16 op=UNLOAD Oct 29 01:28:43.129000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.129000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd6dbf81e0 a2=50 a3=7ffd6dbf82c0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.129000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dbf8120 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd6dbf8150 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd6dbf8060 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dbf8170 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dbf8150 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffd6dbf8140 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dbf8170 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd6dbf8150 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd6dbf8170 a2=28 a3=0 items=0 
ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.136000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.136000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd6dbf8140 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.136000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd6dbf81b0 a2=28 a3=0 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd6dbf7f60 a2=50 a3=1 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } 
for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit: BPF prog-id=17 op=LOAD Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd6dbf7f60 a2=94 a3=5 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit: BPF prog-id=17 op=UNLOAD Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd6dbf8010 a2=50 a3=1 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd6dbf8130 a2=4 a3=38 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 
audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { confidentiality } for pid=3761 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd6dbf8180 a2=94 a3=6 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { confidentiality } for pid=3761 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd6dbf7930 a2=94 a3=88 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { perfmon } for pid=3761 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { bpf } for pid=3761 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.137000 audit[3761]: AVC avc: denied { confidentiality } for pid=3761 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 01:28:43.137000 audit[3761]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd6dbf7930 a2=94 a3=88 items=0 ppid=3719 pid=3761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.137000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit: BPF prog-id=18 op=LOAD Oct 29 01:28:43.144000 audit[3764]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde3a5c4c0 a2=98 a3=1999999999999999 items=0 ppid=3719 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.144000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 29 01:28:43.144000 audit: BPF prog-id=18 op=UNLOAD Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.144000 audit: BPF prog-id=19 op=LOAD Oct 29 01:28:43.144000 audit[3764]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde3a5c3a0 a2=94 a3=ffff items=0 ppid=3719 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.144000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 29 01:28:43.145000 audit: BPF prog-id=19 op=UNLOAD Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { 
bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { perfmon } for pid=3764 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit[3764]: AVC avc: denied { bpf } for pid=3764 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.145000 audit: BPF prog-id=20 op=LOAD Oct 29 01:28:43.145000 audit[3764]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffde3a5c3e0 a2=94 a3=7ffde3a5c5c0 items=0 ppid=3719 pid=3764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.145000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 29 01:28:43.145000 audit: BPF prog-id=20 op=UNLOAD Oct 29 01:28:43.190686 systemd-networkd[1110]: vxlan.calico: Link UP Oct 29 01:28:43.190690 systemd-networkd[1110]: vxlan.calico: Gained carrier Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.214000 audit: BPF prog-id=21 op=LOAD Oct 29 01:28:43.214000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe31c943a0 a2=98 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.214000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.214000 audit: BPF prog-id=21 op=UNLOAD Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit: BPF prog-id=22 op=LOAD Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe31c941b0 a2=94 a3=54428f items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit: BPF prog-id=22 op=UNLOAD Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 
01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit: BPF prog-id=23 op=LOAD Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe31c941e0 a2=94 a3=2 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit: BPF prog-id=23 op=UNLOAD Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe31c940b0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7ffe31c940e0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe31c93ff0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe31c94100 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe31c940e0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe31c940d0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe31c94100 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe31c940e0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe31c94100 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe31c940d0 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe31c94140 a2=28 a3=0 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit: BPF prog-id=24 op=LOAD Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe31c93fb0 a2=94 a3=0 
items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit: BPF prog-id=24 op=UNLOAD Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe31c93fa0 a2=50 a3=2800 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe31c93fa0 a2=50 a3=2800 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for 
pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit: BPF prog-id=25 op=LOAD Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe31c937c0 a2=94 a3=2 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.217000 audit: BPF prog-id=25 op=UNLOAD Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { perfmon } for pid=3791 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit[3791]: AVC avc: denied { bpf } for pid=3791 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.217000 audit: BPF prog-id=26 op=LOAD Oct 29 01:28:43.217000 audit[3791]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe31c938c0 a2=94 a3=30 items=0 ppid=3719 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.217000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit: BPF prog-id=27 op=LOAD Oct 29 01:28:43.223000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffcf583850 a2=98 a3=0 items=0 ppid=3719 pid=3793 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.223000 audit: BPF prog-id=27 op=UNLOAD Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 
audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit: BPF prog-id=28 op=LOAD Oct 29 01:28:43.223000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcf583640 a2=94 a3=54428f items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.223000 audit: BPF prog-id=28 op=UNLOAD Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for 
pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.223000 audit: BPF prog-id=29 op=LOAD Oct 29 01:28:43.223000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcf583670 a2=94 a3=2 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.223000 audit: BPF prog-id=29 op=UNLOAD Oct 29 01:28:43.263263 systemd-networkd[1110]: cali7feb3b10753: Gained IPv6LL Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: 
AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit: BPF prog-id=30 op=LOAD Oct 29 01:28:43.342000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffcf583530 a2=94 a3=1 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.342000 audit: BPF prog-id=30 op=UNLOAD Oct 29 01:28:43.342000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.342000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffcf583600 a2=50 a3=7fffcf5836e0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffcf583540 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffcf583570 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffcf583480 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffcf583590 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffcf583570 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffcf583560 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffcf583590 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffcf583570 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffcf583590 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffcf583560 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.349000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.349000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffcf5835d0 a2=28 a3=0 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffcf583380 a2=50 a3=1 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit: BPF prog-id=31 op=LOAD Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffcf583380 a2=94 a3=5 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit: BPF prog-id=31 op=UNLOAD Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffcf583430 a2=50 a3=1 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffcf583550 a2=4 a3=38 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: 
denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { confidentiality } for pid=3793 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffcf5835a0 a2=94 a3=6 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { confidentiality } for pid=3793 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffcf582d50 a2=94 a3=88 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { perfmon } for pid=3793 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { confidentiality } for pid=3793 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffcf582d50 a2=94 a3=88 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffcf584780 a2=10 a3=f8f00800 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffcf584620 a2=10 a3=3 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.350000 audit[3793]: AVC avc: denied { bpf } for pid=3793 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.350000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffcf5845c0 a2=10 a3=3 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.351000 audit[3793]: AVC avc: denied { bpf } for pid=3793 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 01:28:43.351000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffcf5845c0 a2=10 a3=7 items=0 ppid=3719 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 01:28:43.354000 audit: BPF prog-id=26 op=UNLOAD Oct 29 01:28:43.399000 audit[3830]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3830 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:43.399000 audit[3830]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc7fd23570 a2=0 a3=7ffc7fd2355c items=0 ppid=3719 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.399000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:43.406000 audit[3828]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:43.406000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffce64ac6c0 a2=0 a3=7ffce64ac6ac items=0 ppid=3719 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.406000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:43.409000 audit[3829]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=3829 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:43.409000 audit[3829]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe5e260680 a2=0 a3=7ffe5e26066c items=0 ppid=3719 pid=3829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.409000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:43.411000 audit[3831]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=3831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:43.411000 audit[3831]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe876f29d0 a2=0 a3=7ffe876f29bc items=0 ppid=3719 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.411000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:43.492829 kubelet[2259]: I1029 01:28:43.492746 2259 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ad8127-18bd-4b5b-a585-d5881c52dc3e" path="/var/lib/kubelet/pods/d5ad8127-18bd-4b5b-a585-d5881c52dc3e/volumes" Oct 29 01:28:43.652335 kubelet[2259]: E1029 
01:28:43.652294 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f" Oct 29 01:28:43.667000 audit[3848]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=3848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:43.667000 audit[3848]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe8fcc5160 a2=0 a3=7ffe8fcc514c items=0 ppid=2360 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:43.671000 audit[3848]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=3848 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:43.671000 audit[3848]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe8fcc5160 a2=0 a3=0 items=0 
ppid=2360 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:43.671000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:44.731294 systemd-networkd[1110]: vxlan.calico: Gained IPv6LL Oct 29 01:28:46.490375 env[1344]: time="2025-10-29T01:28:46.490346353Z" level=info msg="StopPodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\"" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.516 [INFO][3862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.516 [INFO][3862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" iface="eth0" netns="/var/run/netns/cni-efcb1d36-a19a-b952-53f5-b5ff6ed81646" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.516 [INFO][3862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" iface="eth0" netns="/var/run/netns/cni-efcb1d36-a19a-b952-53f5-b5ff6ed81646" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.516 [INFO][3862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" iface="eth0" netns="/var/run/netns/cni-efcb1d36-a19a-b952-53f5-b5ff6ed81646" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.516 [INFO][3862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.516 [INFO][3862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.544 [INFO][3869] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.544 [INFO][3869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.544 [INFO][3869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.547 [WARNING][3869] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.547 [INFO][3869] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.548 [INFO][3869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:46.551093 env[1344]: 2025-10-29 01:28:46.549 [INFO][3862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:28:46.553085 systemd[1]: run-netns-cni\x2defcb1d36\x2da19a\x2db952\x2d53f5\x2db5ff6ed81646.mount: Deactivated successfully. 
Oct 29 01:28:46.553376 env[1344]: time="2025-10-29T01:28:46.553343068Z" level=info msg="TearDown network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" successfully" Oct 29 01:28:46.553431 env[1344]: time="2025-10-29T01:28:46.553419017Z" level=info msg="StopPodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" returns successfully" Oct 29 01:28:46.554372 env[1344]: time="2025-10-29T01:28:46.554351682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h6hwn,Uid:2fd41a65-6531-4e39-b3e6-b8b8fe6bf795,Namespace:kube-system,Attempt:1,}" Oct 29 01:28:46.628933 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 01:28:46.629006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali49133034a20: link becomes ready Oct 29 01:28:46.625368 systemd-networkd[1110]: cali49133034a20: Link UP Oct 29 01:28:46.629338 systemd-networkd[1110]: cali49133034a20: Gained carrier Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.583 [INFO][3875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0 coredns-668d6bf9bc- kube-system 2fd41a65-6531-4e39-b3e6-b8b8fe6bf795 902 0 2025-10-29 01:28:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-h6hwn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49133034a20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.584 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.603 [INFO][3889] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" HandleID="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.603 [INFO][3889] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" HandleID="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-h6hwn", "timestamp":"2025-10-29 01:28:46.603767629 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.603 [INFO][3889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.604 [INFO][3889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.604 [INFO][3889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.608 [INFO][3889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.610 [INFO][3889] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.612 [INFO][3889] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.613 [INFO][3889] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.614 [INFO][3889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.614 [INFO][3889] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.614 [INFO][3889] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.617 [INFO][3889] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.620 [INFO][3889] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" host="localhost" Oct 29 
01:28:46.640621 env[1344]: 2025-10-29 01:28:46.620 [INFO][3889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" host="localhost" Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.620 [INFO][3889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:46.640621 env[1344]: 2025-10-29 01:28:46.620 [INFO][3889] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" HandleID="k8s-pod-network.ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.641484 env[1344]: 2025-10-29 01:28:46.622 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-668d6bf9bc-h6hwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49133034a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:46.641484 env[1344]: 2025-10-29 01:28:46.622 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.641484 env[1344]: 2025-10-29 01:28:46.622 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49133034a20 ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.641484 env[1344]: 2025-10-29 01:28:46.629 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.641484 env[1344]: 2025-10-29 01:28:46.630 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df", Pod:"coredns-668d6bf9bc-h6hwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49133034a20", MAC:"e6:32:b5:a4:40:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:46.641484 env[1344]: 2025-10-29 01:28:46.637 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df" Namespace="kube-system" Pod="coredns-668d6bf9bc-h6hwn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:28:46.649000 audit[3907]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:46.649000 audit[3907]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffe99359f70 a2=0 a3=7ffe99359f5c items=0 ppid=3719 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:46.649000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:46.653653 env[1344]: time="2025-10-29T01:28:46.653175132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:46.653653 env[1344]: time="2025-10-29T01:28:46.653209858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:46.653653 env[1344]: time="2025-10-29T01:28:46.653217863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:46.653907 env[1344]: time="2025-10-29T01:28:46.653876606Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df pid=3913 runtime=io.containerd.runc.v2 Oct 29 01:28:46.681653 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:46.701972 env[1344]: time="2025-10-29T01:28:46.701944248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h6hwn,Uid:2fd41a65-6531-4e39-b3e6-b8b8fe6bf795,Namespace:kube-system,Attempt:1,} returns sandbox id \"ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df\"" Oct 29 01:28:46.704052 env[1344]: time="2025-10-29T01:28:46.704029161Z" level=info msg="CreateContainer within sandbox \"ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 01:28:46.715541 env[1344]: time="2025-10-29T01:28:46.715514617Z" level=info msg="CreateContainer within sandbox \"ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"655c5d0cdb66c27b8885bd51fa5771d8b2191323c137e229ec91d543839855db\"" Oct 29 01:28:46.716628 env[1344]: time="2025-10-29T01:28:46.715999113Z" level=info msg="StartContainer for \"655c5d0cdb66c27b8885bd51fa5771d8b2191323c137e229ec91d543839855db\"" Oct 29 01:28:46.749232 env[1344]: time="2025-10-29T01:28:46.749151997Z" level=info msg="StartContainer for \"655c5d0cdb66c27b8885bd51fa5771d8b2191323c137e229ec91d543839855db\" returns successfully" Oct 29 01:28:47.491966 env[1344]: time="2025-10-29T01:28:47.491158645Z" level=info msg="StopPodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\"" Oct 29 01:28:47.491966 env[1344]: 
time="2025-10-29T01:28:47.491566521Z" level=info msg="StopPodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\"" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.545 [INFO][4003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.546 [INFO][4003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" iface="eth0" netns="/var/run/netns/cni-8c0ae8f1-d84d-dbad-566a-27946680d642" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.546 [INFO][4003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" iface="eth0" netns="/var/run/netns/cni-8c0ae8f1-d84d-dbad-566a-27946680d642" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.547 [INFO][4003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" iface="eth0" netns="/var/run/netns/cni-8c0ae8f1-d84d-dbad-566a-27946680d642" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.547 [INFO][4003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.547 [INFO][4003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.563 [INFO][4016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.564 [INFO][4016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.564 [INFO][4016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.569 [WARNING][4016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.569 [INFO][4016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.577 [INFO][4016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:47.581366 env[1344]: 2025-10-29 01:28:47.580 [INFO][4003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:28:47.583065 systemd[1]: run-netns-cni\x2d8c0ae8f1\x2dd84d\x2ddbad\x2d566a\x2d27946680d642.mount: Deactivated successfully. 
Oct 29 01:28:47.583951 env[1344]: time="2025-10-29T01:28:47.583927991Z" level=info msg="TearDown network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" successfully" Oct 29 01:28:47.584000 env[1344]: time="2025-10-29T01:28:47.583949108Z" level=info msg="StopPodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" returns successfully" Oct 29 01:28:47.584274 env[1344]: time="2025-10-29T01:28:47.584258900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h4k8p,Uid:5af81d8c-f0dd-4f37-b2dd-db8e64891fd3,Namespace:calico-system,Attempt:1,}" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.544 [INFO][4002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.544 [INFO][4002] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" iface="eth0" netns="/var/run/netns/cni-73e4416c-b54c-d3c5-0ba0-070d4d8ab835" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.544 [INFO][4002] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" iface="eth0" netns="/var/run/netns/cni-73e4416c-b54c-d3c5-0ba0-070d4d8ab835" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.544 [INFO][4002] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" iface="eth0" netns="/var/run/netns/cni-73e4416c-b54c-d3c5-0ba0-070d4d8ab835" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.544 [INFO][4002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.544 [INFO][4002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.596 [INFO][4020] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.596 [INFO][4020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.596 [INFO][4020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.601 [WARNING][4020] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.601 [INFO][4020] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.602 [INFO][4020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:47.605988 env[1344]: 2025-10-29 01:28:47.605 [INFO][4002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:28:47.607714 systemd[1]: run-netns-cni\x2d73e4416c\x2db54c\x2dd3c5\x2d0ba0\x2d070d4d8ab835.mount: Deactivated successfully. 
Oct 29 01:28:47.608639 env[1344]: time="2025-10-29T01:28:47.608607840Z" level=info msg="TearDown network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" successfully" Oct 29 01:28:47.608639 env[1344]: time="2025-10-29T01:28:47.608631248Z" level=info msg="StopPodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" returns successfully" Oct 29 01:28:47.609054 env[1344]: time="2025-10-29T01:28:47.609034417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-q8r9t,Uid:c80c9899-41ee-40c7-92c7-ab20d72dcefe,Namespace:calico-apiserver,Attempt:1,}" Oct 29 01:28:47.693800 kubelet[2259]: I1029 01:28:47.693616 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h6hwn" podStartSLOduration=35.693602379 podStartE2EDuration="35.693602379s" podCreationTimestamp="2025-10-29 01:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 01:28:47.674391162 +0000 UTC m=+40.330506187" watchObservedRunningTime="2025-10-29 01:28:47.693602379 +0000 UTC m=+40.349717407" Oct 29 01:28:47.698000 audit[4069]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.701874 kernel: kauditd_printk_skb: 562 callbacks suppressed Oct 29 01:28:47.701933 kernel: audit: type=1325 audit(1761701327.698:399): table=filter:110 family=2 entries=20 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.705971 kernel: audit: type=1300 audit(1761701327.698:399): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc8df6a30 a2=0 a3=7ffcc8df6a1c items=0 ppid=2360 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.698000 audit[4069]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc8df6a30 a2=0 a3=7ffcc8df6a1c items=0 ppid=2360 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.706000 audit[4069]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.710947 kernel: audit: type=1327 audit(1761701327.698:399): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.710986 kernel: audit: type=1325 audit(1761701327.706:400): table=nat:111 family=2 entries=14 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.706000 audit[4069]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc8df6a30 a2=0 a3=0 items=0 ppid=2360 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.718022 kernel: audit: type=1300 audit(1761701327.706:400): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc8df6a30 a2=0 a3=0 items=0 ppid=2360 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.718177 kernel: audit: type=1327 audit(1761701327.706:400): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.732373 systemd-networkd[1110]: calid48934885b3: Link UP Oct 29 01:28:47.735049 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 01:28:47.735092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid48934885b3: link becomes ready Oct 29 01:28:47.735160 systemd-networkd[1110]: calid48934885b3: Gained carrier Oct 29 01:28:47.740000 audit[4073]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.743655 kernel: audit: type=1325 audit(1761701327.740:401): table=filter:112 family=2 entries=17 op=nft_register_rule pid=4073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.740000 audit[4073]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca188fb00 a2=0 a3=7ffca188faec items=0 ppid=2360 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.749218 kernel: audit: type=1300 audit(1761701327.740:401): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca188fb00 a2=0 a3=7ffca188faec items=0 ppid=2360 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.750000 audit[4073]: NETFILTER_CFG table=nat:113 family=2 entries=35 op=nft_register_chain pid=4073 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.754396 kernel: audit: type=1327 audit(1761701327.740:401): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.754452 kernel: audit: type=1325 audit(1761701327.750:402): table=nat:113 family=2 entries=35 op=nft_register_chain pid=4073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:47.750000 audit[4073]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca188fb00 a2=0 a3=7ffca188faec items=0 ppid=2360 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.628 [INFO][4027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--h4k8p-eth0 goldmane-666569f655- calico-system 5af81d8c-f0dd-4f37-b2dd-db8e64891fd3 912 0 2025-10-29 01:28:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-h4k8p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid48934885b3 [] [] }} ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.628 [INFO][4027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.656 [INFO][4053] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" HandleID="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.656 [INFO][4053] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" HandleID="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-h4k8p", "timestamp":"2025-10-29 01:28:47.656285057 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.656 [INFO][4053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.656 [INFO][4053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.656 [INFO][4053] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.694 [INFO][4053] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.706 [INFO][4053] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.715 [INFO][4053] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.718 [INFO][4053] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.719 [INFO][4053] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.719 [INFO][4053] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.720 [INFO][4053] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.722 [INFO][4053] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.725 [INFO][4053] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" host="localhost" Oct 29 
01:28:47.754595 env[1344]: 2025-10-29 01:28:47.725 [INFO][4053] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" host="localhost" Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.725 [INFO][4053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:47.754595 env[1344]: 2025-10-29 01:28:47.725 [INFO][4053] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" HandleID="k8s-pod-network.a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.759341 env[1344]: 2025-10-29 01:28:47.728 [INFO][4027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--h4k8p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-h4k8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid48934885b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:47.759341 env[1344]: 2025-10-29 01:28:47.729 [INFO][4027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.759341 env[1344]: 2025-10-29 01:28:47.729 [INFO][4027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid48934885b3 ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.759341 env[1344]: 2025-10-29 01:28:47.735 [INFO][4027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.759341 env[1344]: 2025-10-29 01:28:47.735 [INFO][4027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--h4k8p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c", Pod:"goldmane-666569f655-h4k8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid48934885b3", MAC:"e2:eb:40:49:86:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:47.759341 env[1344]: 2025-10-29 01:28:47.744 [INFO][4027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c" Namespace="calico-system" Pod="goldmane-666569f655-h4k8p" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:28:47.797823 env[1344]: time="2025-10-29T01:28:47.797714669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:47.797823 env[1344]: time="2025-10-29T01:28:47.797738481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:47.797823 env[1344]: time="2025-10-29T01:28:47.797745378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:47.798118 env[1344]: time="2025-10-29T01:28:47.798091839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c pid=4088 runtime=io.containerd.runc.v2 Oct 29 01:28:47.812000 audit[4110]: NETFILTER_CFG table=filter:114 family=2 entries=48 op=nft_register_chain pid=4110 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:47.812000 audit[4110]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffceb09c430 a2=0 a3=7ffceb09c41c items=0 ppid=3719 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.812000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:47.827968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali261b1403950: link becomes ready Oct 29 01:28:47.827560 systemd-networkd[1110]: cali261b1403950: Link UP Oct 29 01:28:47.833205 systemd-networkd[1110]: cali261b1403950: Gained carrier Oct 29 01:28:47.838883 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.641 [INFO][4040] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0 calico-apiserver-79569d88b4- calico-apiserver c80c9899-41ee-40c7-92c7-ab20d72dcefe 913 0 2025-10-29 01:28:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79569d88b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79569d88b4-q8r9t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali261b1403950 [] [] }} ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.641 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.691 [INFO][4060] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" HandleID="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.692 [INFO][4060] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" HandleID="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79569d88b4-q8r9t", "timestamp":"2025-10-29 01:28:47.691955951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.692 [INFO][4060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.725 [INFO][4060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.725 [INFO][4060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.782 [INFO][4060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.801 [INFO][4060] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.811 [INFO][4060] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.812 [INFO][4060] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.814 [INFO][4060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.814 [INFO][4060] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" host="localhost" Oct 29 01:28:47.839073 env[1344]: 
2025-10-29 01:28:47.816 [INFO][4060] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375 Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.818 [INFO][4060] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.822 [INFO][4060] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.822 [INFO][4060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" host="localhost" Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.822 [INFO][4060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:28:47.839073 env[1344]: 2025-10-29 01:28:47.822 [INFO][4060] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" HandleID="k8s-pod-network.9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.840579 env[1344]: 2025-10-29 01:28:47.824 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c80c9899-41ee-40c7-92c7-ab20d72dcefe", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79569d88b4-q8r9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261b1403950", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:47.840579 env[1344]: 2025-10-29 01:28:47.824 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.840579 env[1344]: 2025-10-29 01:28:47.824 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali261b1403950 ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.840579 env[1344]: 2025-10-29 01:28:47.826 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.840579 env[1344]: 2025-10-29 01:28:47.827 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"c80c9899-41ee-40c7-92c7-ab20d72dcefe", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375", Pod:"calico-apiserver-79569d88b4-q8r9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261b1403950", MAC:"c6:58:af:42:f2:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:47.840579 env[1344]: 2025-10-29 01:28:47.833 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-q8r9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:28:47.852000 audit[4127]: NETFILTER_CFG table=filter:115 family=2 entries=58 op=nft_register_chain pid=4127 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:47.852000 audit[4127]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffe8e99f260 a2=0 a3=7ffe8e99f24c items=0 ppid=3719 pid=4127 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:47.852000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:47.859552 env[1344]: time="2025-10-29T01:28:47.859532177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h4k8p,Uid:5af81d8c-f0dd-4f37-b2dd-db8e64891fd3,Namespace:calico-system,Attempt:1,} returns sandbox id \"a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c\"" Oct 29 01:28:47.860824 env[1344]: time="2025-10-29T01:28:47.860812128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 01:28:47.866985 env[1344]: time="2025-10-29T01:28:47.866948703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:47.867085 env[1344]: time="2025-10-29T01:28:47.866973504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:47.867085 env[1344]: time="2025-10-29T01:28:47.866980321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:47.867210 env[1344]: time="2025-10-29T01:28:47.867178325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375 pid=4142 runtime=io.containerd.runc.v2 Oct 29 01:28:47.885137 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:47.904681 env[1344]: time="2025-10-29T01:28:47.904653486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-q8r9t,Uid:c80c9899-41ee-40c7-92c7-ab20d72dcefe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375\"" Oct 29 01:28:48.123289 systemd-networkd[1110]: cali49133034a20: Gained IPv6LL Oct 29 01:28:48.216300 env[1344]: time="2025-10-29T01:28:48.216270574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:48.216688 env[1344]: time="2025-10-29T01:28:48.216660287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 01:28:48.216907 kubelet[2259]: E1029 01:28:48.216876 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 01:28:48.216963 kubelet[2259]: E1029 01:28:48.216912 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 01:28:48.217099 kubelet[2259]: E1029 01:28:48.217059 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9wcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h4k8p_calico-system(5af81d8c-f0dd-4f37-b2dd-db8e64891fd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:48.217485 env[1344]: time="2025-10-29T01:28:48.217468841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 01:28:48.218413 kubelet[2259]: E1029 01:28:48.218372 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:28:48.491179 env[1344]: time="2025-10-29T01:28:48.491150136Z" level=info msg="StopPodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\"" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.524 [INFO][4193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.524 [INFO][4193] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" iface="eth0" netns="/var/run/netns/cni-d6131fe1-ef56-1593-eb16-3d80981e8ad1" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.524 [INFO][4193] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" iface="eth0" netns="/var/run/netns/cni-d6131fe1-ef56-1593-eb16-3d80981e8ad1" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.524 [INFO][4193] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" iface="eth0" netns="/var/run/netns/cni-d6131fe1-ef56-1593-eb16-3d80981e8ad1" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.524 [INFO][4193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.524 [INFO][4193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.538 [INFO][4200] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.538 [INFO][4200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.538 [INFO][4200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.541 [WARNING][4200] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.542 [INFO][4200] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.542 [INFO][4200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:48.545120 env[1344]: 2025-10-29 01:28:48.543 [INFO][4193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:28:48.547256 env[1344]: time="2025-10-29T01:28:48.545233696Z" level=info msg="TearDown network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" successfully" Oct 29 01:28:48.547256 env[1344]: time="2025-10-29T01:28:48.545261792Z" level=info msg="StopPodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" returns successfully" Oct 29 01:28:48.547256 env[1344]: time="2025-10-29T01:28:48.546219887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w9mz,Uid:1b41dfbb-cb8d-4095-9219-ece15b48c5c3,Namespace:calico-system,Attempt:1,}" Oct 29 01:28:48.555969 systemd[1]: run-netns-cni\x2dd6131fe1\x2def56\x2d1593\x2deb16\x2d3d80981e8ad1.mount: Deactivated successfully. 
Oct 29 01:28:48.556993 env[1344]: time="2025-10-29T01:28:48.556973258Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:48.557602 env[1344]: time="2025-10-29T01:28:48.557215140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 01:28:48.557646 kubelet[2259]: E1029 01:28:48.557343 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 01:28:48.557646 kubelet[2259]: E1029 01:28:48.557376 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 01:28:48.557646 kubelet[2259]: E1029 01:28:48.557456 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwtb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79569d88b4-q8r9t_calico-apiserver(c80c9899-41ee-40c7-92c7-ab20d72dcefe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:48.558707 kubelet[2259]: E1029 01:28:48.558666 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:28:48.638332 systemd-networkd[1110]: calibdaed86d59e: Link UP Oct 29 01:28:48.639608 systemd-networkd[1110]: calibdaed86d59e: Gained carrier Oct 29 01:28:48.640244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibdaed86d59e: link becomes ready Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.596 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6w9mz-eth0 csi-node-driver- calico-system 1b41dfbb-cb8d-4095-9219-ece15b48c5c3 938 0 2025-10-29 01:28:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6w9mz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibdaed86d59e [] [] }} ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-" Oct 29 
01:28:48.649872 env[1344]: 2025-10-29 01:28:48.596 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.613 [INFO][4219] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" HandleID="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.614 [INFO][4219] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" HandleID="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6w9mz", "timestamp":"2025-10-29 01:28:48.61395345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.614 [INFO][4219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.614 [INFO][4219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.614 [INFO][4219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.619 [INFO][4219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.621 [INFO][4219] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.623 [INFO][4219] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.624 [INFO][4219] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.625 [INFO][4219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.625 [INFO][4219] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.626 [INFO][4219] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.628 [INFO][4219] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.634 [INFO][4219] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" host="localhost" Oct 29 
01:28:48.649872 env[1344]: 2025-10-29 01:28:48.634 [INFO][4219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" host="localhost" Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.634 [INFO][4219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:48.649872 env[1344]: 2025-10-29 01:28:48.634 [INFO][4219] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" HandleID="k8s-pod-network.cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.650603 env[1344]: 2025-10-29 01:28:48.635 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6w9mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b41dfbb-cb8d-4095-9219-ece15b48c5c3", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6w9mz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdaed86d59e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:48.650603 env[1344]: 2025-10-29 01:28:48.635 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.650603 env[1344]: 2025-10-29 01:28:48.636 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdaed86d59e ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.650603 env[1344]: 2025-10-29 01:28:48.639 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.650603 env[1344]: 2025-10-29 01:28:48.639 [INFO][4206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6w9mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b41dfbb-cb8d-4095-9219-ece15b48c5c3", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d", Pod:"csi-node-driver-6w9mz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdaed86d59e", MAC:"da:f3:68:17:4b:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:48.650603 env[1344]: 2025-10-29 01:28:48.646 [INFO][4206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d" Namespace="calico-system" Pod="csi-node-driver-6w9mz" WorkloadEndpoint="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:28:48.658000 audit[4234]: NETFILTER_CFG table=filter:116 family=2 entries=48 op=nft_register_chain pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 
01:28:48.658000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7fff203ff3e0 a2=0 a3=7fff203ff3cc items=0 ppid=3719 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:48.658000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:48.660517 env[1344]: time="2025-10-29T01:28:48.660474452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:48.660625 env[1344]: time="2025-10-29T01:28:48.660607607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:48.660692 env[1344]: time="2025-10-29T01:28:48.660668654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:48.660871 env[1344]: time="2025-10-29T01:28:48.660849641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d pid=4242 runtime=io.containerd.runc.v2 Oct 29 01:28:48.667361 kubelet[2259]: E1029 01:28:48.667336 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:28:48.677553 kubelet[2259]: E1029 01:28:48.677297 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:28:48.685905 systemd[1]: run-containerd-runc-k8s.io-cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d-runc.kHTwmS.mount: Deactivated successfully. 
Oct 29 01:28:48.705000 audit[4270]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:48.705000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7e480060 a2=0 a3=7ffd7e48004c items=0 ppid=2360 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:48.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:48.709000 audit[4270]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:48.709758 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:48.709000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd7e480060 a2=0 a3=7ffd7e48004c items=0 ppid=2360 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:48.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:48.717509 env[1344]: time="2025-10-29T01:28:48.717490262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6w9mz,Uid:1b41dfbb-cb8d-4095-9219-ece15b48c5c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d\"" Oct 29 01:28:48.718541 env[1344]: time="2025-10-29T01:28:48.718530674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 
01:28:49.047628 env[1344]: time="2025-10-29T01:28:49.047589436Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:49.048153 env[1344]: time="2025-10-29T01:28:49.048122269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 01:28:49.048331 kubelet[2259]: E1029 01:28:49.048285 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 01:28:49.048616 kubelet[2259]: E1029 01:28:49.048339 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 01:28:49.048616 kubelet[2259]: E1029 01:28:49.048553 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:49.050252 env[1344]: time="2025-10-29T01:28:49.050227663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 01:28:49.403312 systemd-networkd[1110]: cali261b1403950: Gained IPv6LL Oct 29 01:28:49.448458 env[1344]: time="2025-10-29T01:28:49.448428894Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:49.448851 env[1344]: time="2025-10-29T01:28:49.448823082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 01:28:49.449087 kubelet[2259]: E1029 01:28:49.449051 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 01:28:49.449210 kubelet[2259]: E1029 01:28:49.449194 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 01:28:49.449614 kubelet[2259]: E1029 01:28:49.449401 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:49.453702 kubelet[2259]: E1029 01:28:49.453666 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:49.490128 env[1344]: time="2025-10-29T01:28:49.490106739Z" level=info msg="StopPodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\"" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.521 [INFO][4289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.521 [INFO][4289] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" iface="eth0" netns="/var/run/netns/cni-ee349e1c-01d1-266b-27d5-13d57a0a1a45" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.521 [INFO][4289] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" iface="eth0" netns="/var/run/netns/cni-ee349e1c-01d1-266b-27d5-13d57a0a1a45" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.521 [INFO][4289] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" iface="eth0" netns="/var/run/netns/cni-ee349e1c-01d1-266b-27d5-13d57a0a1a45" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.521 [INFO][4289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.521 [INFO][4289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.536 [INFO][4297] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.536 [INFO][4297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.536 [INFO][4297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.540 [WARNING][4297] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.540 [INFO][4297] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.541 [INFO][4297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:49.543540 env[1344]: 2025-10-29 01:28:49.542 [INFO][4289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:28:49.543912 env[1344]: time="2025-10-29T01:28:49.543671863Z" level=info msg="TearDown network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" successfully" Oct 29 01:28:49.543912 env[1344]: time="2025-10-29T01:28:49.543703262Z" level=info msg="StopPodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" returns successfully" Oct 29 01:28:49.544324 env[1344]: time="2025-10-29T01:28:49.544310387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlq7c,Uid:a0f7d88c-b007-49f1-8b19-74afbf972b6c,Namespace:kube-system,Attempt:1,}" Oct 29 01:28:49.554735 systemd[1]: run-netns-cni\x2dee349e1c\x2d01d1\x2d266b\x2d27d5\x2d13d57a0a1a45.mount: Deactivated successfully. 
Oct 29 01:28:49.607063 systemd-networkd[1110]: cali01f27c879e9: Link UP Oct 29 01:28:49.609411 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 01:28:49.609466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali01f27c879e9: link becomes ready Oct 29 01:28:49.610095 systemd-networkd[1110]: cali01f27c879e9: Gained carrier Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.569 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0 coredns-668d6bf9bc- kube-system a0f7d88c-b007-49f1-8b19-74afbf972b6c 962 0 2025-10-29 01:28:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-tlq7c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01f27c879e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.569 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.584 [INFO][4318] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" HandleID="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.584 [INFO][4318] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" HandleID="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tlq7c", "timestamp":"2025-10-29 01:28:49.584658321 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.584 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.584 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.584 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.588 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.590 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.592 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.593 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.594 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 
29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.594 [INFO][4318] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.595 [INFO][4318] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585 Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.597 [INFO][4318] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.601 [INFO][4318] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.601 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" host="localhost" Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.601 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:28:49.623447 env[1344]: 2025-10-29 01:28:49.601 [INFO][4318] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" HandleID="k8s-pod-network.7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.624451 env[1344]: 2025-10-29 01:28:49.603 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a0f7d88c-b007-49f1-8b19-74afbf972b6c", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tlq7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f27c879e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:49.624451 env[1344]: 2025-10-29 01:28:49.603 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.624451 env[1344]: 2025-10-29 01:28:49.603 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01f27c879e9 ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.624451 env[1344]: 2025-10-29 01:28:49.610 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.624451 env[1344]: 2025-10-29 01:28:49.611 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a0f7d88c-b007-49f1-8b19-74afbf972b6c", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585", Pod:"coredns-668d6bf9bc-tlq7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f27c879e9", MAC:"8a:9b:c9:4d:5e:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:49.624451 env[1344]: 2025-10-29 01:28:49.621 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlq7c" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:28:49.632872 env[1344]: time="2025-10-29T01:28:49.632840805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:49.632996 env[1344]: time="2025-10-29T01:28:49.632982522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:49.633066 env[1344]: time="2025-10-29T01:28:49.633053814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:49.633231 env[1344]: time="2025-10-29T01:28:49.633215368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585 pid=4340 runtime=io.containerd.runc.v2 Oct 29 01:28:49.632000 audit[4341]: NETFILTER_CFG table=filter:119 family=2 entries=48 op=nft_register_chain pid=4341 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:49.632000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7fff22be12c0 a2=0 a3=7fff22be12ac items=0 ppid=3719 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:49.632000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:49.648503 systemd[1]: run-containerd-runc-k8s.io-7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585-runc.jHed4C.mount: Deactivated successfully. 
Oct 29 01:28:49.657970 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:49.678839 kubelet[2259]: E1029 01:28:49.678753 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:28:49.678839 kubelet[2259]: E1029 01:28:49.678801 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:28:49.679524 kubelet[2259]: E1029 01:28:49.679076 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:49.702843 env[1344]: time="2025-10-29T01:28:49.702821726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlq7c,Uid:a0f7d88c-b007-49f1-8b19-74afbf972b6c,Namespace:kube-system,Attempt:1,} returns sandbox id \"7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585\"" Oct 29 01:28:49.704643 env[1344]: time="2025-10-29T01:28:49.704630140Z" level=info msg="CreateContainer within sandbox \"7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 01:28:49.712230 env[1344]: time="2025-10-29T01:28:49.712211619Z" level=info msg="CreateContainer within sandbox \"7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a9168ca2c81f2f4059503227ee640be1f734293cc4896259e7bf06c174031c6\"" Oct 29 01:28:49.713575 env[1344]: time="2025-10-29T01:28:49.713561514Z" level=info msg="StartContainer for \"2a9168ca2c81f2f4059503227ee640be1f734293cc4896259e7bf06c174031c6\"" Oct 29 01:28:49.723000 audit[4391]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:49.723000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff0be58e50 a2=0 a3=7fff0be58e3c items=0 ppid=2360 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:49.723000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:49.727000 audit[4391]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:49.727000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff0be58e50 a2=0 a3=7fff0be58e3c items=0 ppid=2360 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:49.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:49.744409 env[1344]: time="2025-10-29T01:28:49.744385569Z" level=info msg="StartContainer for \"2a9168ca2c81f2f4059503227ee640be1f734293cc4896259e7bf06c174031c6\" returns successfully" Oct 29 01:28:49.787443 systemd-networkd[1110]: calid48934885b3: Gained IPv6LL Oct 29 01:28:50.490021 env[1344]: time="2025-10-29T01:28:50.489990190Z" level=info msg="StopPodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\"" Oct 29 01:28:50.490144 env[1344]: time="2025-10-29T01:28:50.489999526Z" level=info msg="StopPodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\"" Oct 29 01:28:50.491299 systemd-networkd[1110]: calibdaed86d59e: Gained IPv6LL Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.530 [INFO][4437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.530 [INFO][4437] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" iface="eth0" netns="/var/run/netns/cni-cba562c2-e407-493e-5378-34f3600b1de9" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.530 [INFO][4437] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" iface="eth0" netns="/var/run/netns/cni-cba562c2-e407-493e-5378-34f3600b1de9" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.530 [INFO][4437] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" iface="eth0" netns="/var/run/netns/cni-cba562c2-e407-493e-5378-34f3600b1de9" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.530 [INFO][4437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.530 [INFO][4437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.555 [INFO][4450] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.555 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.555 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.560 [WARNING][4450] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.560 [INFO][4450] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.561 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:50.569827 env[1344]: 2025-10-29 01:28:50.568 [INFO][4437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:28:50.572745 env[1344]: time="2025-10-29T01:28:50.571697675Z" level=info msg="TearDown network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" successfully" Oct 29 01:28:50.572745 env[1344]: time="2025-10-29T01:28:50.571720402Z" level=info msg="StopPodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" returns successfully" Oct 29 01:28:50.571722 systemd[1]: run-netns-cni\x2dcba562c2\x2de407\x2d493e\x2d5378\x2d34f3600b1de9.mount: Deactivated successfully. 
Oct 29 01:28:50.573460 env[1344]: time="2025-10-29T01:28:50.573440962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-p6h9g,Uid:7dd39e5b-dcdd-482f-ab49-9053a64b98c9,Namespace:calico-apiserver,Attempt:1,}" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.539 [INFO][4436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.539 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" iface="eth0" netns="/var/run/netns/cni-6ceacc06-354e-6c6b-dda9-14d568b41da0" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.540 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" iface="eth0" netns="/var/run/netns/cni-6ceacc06-354e-6c6b-dda9-14d568b41da0" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.540 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" iface="eth0" netns="/var/run/netns/cni-6ceacc06-354e-6c6b-dda9-14d568b41da0" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.540 [INFO][4436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.540 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.588 [INFO][4456] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.588 [INFO][4456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.588 [INFO][4456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.592 [WARNING][4456] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.592 [INFO][4456] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.593 [INFO][4456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:50.596004 env[1344]: 2025-10-29 01:28:50.594 [INFO][4436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:28:50.598743 env[1344]: time="2025-10-29T01:28:50.598553680Z" level=info msg="TearDown network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" successfully" Oct 29 01:28:50.598743 env[1344]: time="2025-10-29T01:28:50.598575200Z" level=info msg="StopPodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" returns successfully" Oct 29 01:28:50.597751 systemd[1]: run-netns-cni\x2d6ceacc06\x2d354e\x2d6c6b\x2ddda9\x2d14d568b41da0.mount: Deactivated successfully. 
Oct 29 01:28:50.599288 env[1344]: time="2025-10-29T01:28:50.599274536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46bb89cf-x7bmp,Uid:0c8f51f5-a169-488a-a224-1fc1684a62fb,Namespace:calico-system,Attempt:1,}" Oct 29 01:28:50.676396 systemd-networkd[1110]: cali96d03d2d772: Link UP Oct 29 01:28:50.679000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 01:28:50.679040 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali96d03d2d772: link becomes ready Oct 29 01:28:50.679145 systemd-networkd[1110]: cali96d03d2d772: Gained carrier Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.619 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0 calico-apiserver-79569d88b4- calico-apiserver 7dd39e5b-dcdd-482f-ab49-9053a64b98c9 984 0 2025-10-29 01:28:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79569d88b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79569d88b4-p6h9g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96d03d2d772 [] [] }} ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.619 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.651 
[INFO][4478] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" HandleID="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.651 [INFO][4478] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" HandleID="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79569d88b4-p6h9g", "timestamp":"2025-10-29 01:28:50.651548253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.651 [INFO][4478] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.651 [INFO][4478] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.651 [INFO][4478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.656 [INFO][4478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.662 [INFO][4478] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.664 [INFO][4478] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.665 [INFO][4478] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.666 [INFO][4478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.666 [INFO][4478] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.667 [INFO][4478] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.669 [INFO][4478] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.672 [INFO][4478] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" host="localhost" Oct 29 
01:28:50.693324 env[1344]: 2025-10-29 01:28:50.672 [INFO][4478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" host="localhost" Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.672 [INFO][4478] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:50.693324 env[1344]: 2025-10-29 01:28:50.672 [INFO][4478] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" HandleID="k8s-pod-network.b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.695000 audit[4504]: NETFILTER_CFG table=filter:122 family=2 entries=57 op=nft_register_chain pid=4504 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:50.695000 audit[4504]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7ffcd68b9190 a2=0 a3=7ffcd68b917c items=0 ppid=3719 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:50.695000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:50.696373 env[1344]: 2025-10-29 01:28:50.674 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7dd39e5b-dcdd-482f-ab49-9053a64b98c9", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79569d88b4-p6h9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d03d2d772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:50.696373 env[1344]: 2025-10-29 01:28:50.674 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.696373 env[1344]: 2025-10-29 01:28:50.674 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96d03d2d772 ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" 
Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.696373 env[1344]: 2025-10-29 01:28:50.679 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.696373 env[1344]: 2025-10-29 01:28:50.679 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7dd39e5b-dcdd-482f-ab49-9053a64b98c9", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb", 
Pod:"calico-apiserver-79569d88b4-p6h9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d03d2d772", MAC:"ba:0f:71:ff:14:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:50.696373 env[1344]: 2025-10-29 01:28:50.686 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb" Namespace="calico-apiserver" Pod="calico-apiserver-79569d88b4-p6h9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:28:50.702208 env[1344]: time="2025-10-29T01:28:50.699682012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:50.702208 env[1344]: time="2025-10-29T01:28:50.699753718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:50.702208 env[1344]: time="2025-10-29T01:28:50.699764256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:50.702208 env[1344]: time="2025-10-29T01:28:50.700293912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb pid=4511 runtime=io.containerd.runc.v2 Oct 29 01:28:50.703481 kubelet[2259]: E1029 01:28:50.703439 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:28:50.714000 audit[4526]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:50.714000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd76db0160 a2=0 a3=7ffd76db014c items=0 ppid=2360 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:50.714000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:50.721500 kubelet[2259]: I1029 01:28:50.721462 2259 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tlq7c" podStartSLOduration=38.721449516 podStartE2EDuration="38.721449516s" podCreationTimestamp="2025-10-29 01:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 01:28:50.712246956 +0000 UTC m=+43.368361988" watchObservedRunningTime="2025-10-29 01:28:50.721449516 +0000 UTC m=+43.377564543" Oct 29 01:28:50.726000 audit[4526]: NETFILTER_CFG table=nat:124 family=2 entries=44 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:28:50.726000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd76db0160 a2=0 a3=7ffd76db014c items=0 ppid=2360 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:50.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:28:50.747287 systemd-networkd[1110]: cali01f27c879e9: Gained IPv6LL Oct 29 01:28:50.761813 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:50.793711 env[1344]: time="2025-10-29T01:28:50.792426128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79569d88b4-p6h9g,Uid:7dd39e5b-dcdd-482f-ab49-9053a64b98c9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb\"" Oct 29 01:28:50.797506 env[1344]: time="2025-10-29T01:28:50.797490212Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 01:28:50.816264 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia6f0d03f484: link becomes ready Oct 29 01:28:50.816391 systemd-networkd[1110]: calia6f0d03f484: Link UP Oct 29 01:28:50.816511 systemd-networkd[1110]: calia6f0d03f484: Gained carrier Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.734 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0 calico-kube-controllers-7b46bb89cf- calico-system 0c8f51f5-a169-488a-a224-1fc1684a62fb 985 0 2025-10-29 01:28:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b46bb89cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7b46bb89cf-x7bmp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia6f0d03f484 [] [] }} ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.734 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.781 [INFO][4537] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" 
HandleID="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.781 [INFO][4537] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" HandleID="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7b46bb89cf-x7bmp", "timestamp":"2025-10-29 01:28:50.781177999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.781 [INFO][4537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.781 [INFO][4537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.781 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.790 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.796 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.800 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.801 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.802 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.802 [INFO][4537] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.803 [INFO][4537] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739 Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.807 [INFO][4537] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.810 [INFO][4537] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" host="localhost" Oct 29 
01:28:50.830610 env[1344]: 2025-10-29 01:28:50.811 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" host="localhost" Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.811 [INFO][4537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:28:50.830610 env[1344]: 2025-10-29 01:28:50.811 [INFO][4537] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" HandleID="k8s-pod-network.bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.831726 env[1344]: 2025-10-29 01:28:50.812 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0", GenerateName:"calico-kube-controllers-7b46bb89cf-", Namespace:"calico-system", SelfLink:"", UID:"0c8f51f5-a169-488a-a224-1fc1684a62fb", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b46bb89cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7b46bb89cf-x7bmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6f0d03f484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:50.831726 env[1344]: 2025-10-29 01:28:50.812 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.831726 env[1344]: 2025-10-29 01:28:50.812 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6f0d03f484 ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.831726 env[1344]: 2025-10-29 01:28:50.815 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.831726 env[1344]: 2025-10-29 01:28:50.818 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0", GenerateName:"calico-kube-controllers-7b46bb89cf-", Namespace:"calico-system", SelfLink:"", UID:"0c8f51f5-a169-488a-a224-1fc1684a62fb", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b46bb89cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739", Pod:"calico-kube-controllers-7b46bb89cf-x7bmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6f0d03f484", MAC:"96:01:8f:9c:c2:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:28:50.831726 env[1344]: 2025-10-29 01:28:50.824 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739" Namespace="calico-system" Pod="calico-kube-controllers-7b46bb89cf-x7bmp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:28:50.839000 audit[4578]: NETFILTER_CFG table=filter:125 family=2 entries=60 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 01:28:50.839000 audit[4578]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffe49884520 a2=0 a3=7ffe4988450c items=0 ppid=3719 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:28:50.839000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 01:28:50.840373 env[1344]: time="2025-10-29T01:28:50.838301226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 01:28:50.840373 env[1344]: time="2025-10-29T01:28:50.838329551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 01:28:50.840373 env[1344]: time="2025-10-29T01:28:50.838336585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 01:28:50.840373 env[1344]: time="2025-10-29T01:28:50.838442520Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739 pid=4572 runtime=io.containerd.runc.v2 Oct 29 01:28:50.856094 systemd-resolved[1274]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 01:28:50.875137 env[1344]: time="2025-10-29T01:28:50.875111635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b46bb89cf-x7bmp,Uid:0c8f51f5-a169-488a-a224-1fc1684a62fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739\"" Oct 29 01:28:51.112915 env[1344]: time="2025-10-29T01:28:51.112840625Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:28:51.113376 env[1344]: time="2025-10-29T01:28:51.113336519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 01:28:51.117559 kubelet[2259]: E1029 01:28:51.117531 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 01:28:51.117864 kubelet[2259]: E1029 01:28:51.117566 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 01:28:51.118841 kubelet[2259]: E1029 01:28:51.118813 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mzpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79569d88b4-p6h9g_calico-apiserver(7dd39e5b-dcdd-482f-ab49-9053a64b98c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 01:28:51.119736 env[1344]: time="2025-10-29T01:28:51.119723495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 01:28:51.120204 kubelet[2259]: E1029 01:28:51.120179 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9" Oct 29 01:28:51.427708 env[1344]: 
time="2025-10-29T01:28:51.427669259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:28:51.436371 env[1344]: time="2025-10-29T01:28:51.436340025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 29 01:28:51.436626 kubelet[2259]: E1029 01:28:51.436588 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 29 01:28:51.436694 kubelet[2259]: E1029 01:28:51.436634 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 29 01:28:51.443301 kubelet[2259]: E1029 01:28:51.436737 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b46bb89cf-x7bmp_calico-system(0c8f51f5-a169-488a-a224-1fc1684a62fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:28:51.443301 kubelet[2259]: E1029 01:28:51.438240 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb"
Oct 29 01:28:51.712234 kubelet[2259]: E1029 01:28:51.712063 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9"
Oct 29 01:28:51.726319 kubelet[2259]: E1029 01:28:51.726289 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb"
Oct 29 01:28:51.738000 audit[4609]: NETFILTER_CFG table=filter:126 family=2 entries=14 op=nft_register_rule pid=4609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 29 01:28:51.738000 audit[4609]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff3a457290 a2=0 a3=7fff3a45727c items=0 ppid=2360 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:28:51.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 29 01:28:51.752000 audit[4609]: NETFILTER_CFG table=nat:127 family=2 entries=56 op=nft_register_chain pid=4609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 29 01:28:51.752000 audit[4609]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff3a457290 a2=0 a3=7fff3a45727c items=0 ppid=2360 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:28:51.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 29 01:28:52.027368 systemd-networkd[1110]: cali96d03d2d772: Gained IPv6LL
Oct 29 01:28:52.283433 systemd-networkd[1110]: calia6f0d03f484: Gained IPv6LL
Oct 29 01:28:52.721653 kubelet[2259]: E1029 01:28:52.721627 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9"
Oct 29 01:28:52.722013 kubelet[2259]: E1029 01:28:52.721751 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb"
Oct 29 01:28:58.490525 env[1344]: time="2025-10-29T01:28:58.490495727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 29 01:28:58.845919 env[1344]: time="2025-10-29T01:28:58.845816730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:28:58.846400 env[1344]: time="2025-10-29T01:28:58.846361625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 29 01:28:58.846549 kubelet[2259]: E1029 01:28:58.846520 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 01:28:58.846856 kubelet[2259]: E1029 01:28:58.846839 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 01:28:58.847050 kubelet[2259]: E1029 01:28:58.847024 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c343149867d44f28e7fda414822142b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dmjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bf5bd85-l59k9_calico-system(d15c73c2-757b-4aa9-bf20-2183e910769f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:28:58.848869 env[1344]: time="2025-10-29T01:28:58.848848540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 29 01:28:59.170448 env[1344]: time="2025-10-29T01:28:59.170395842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:28:59.170857 env[1344]: time="2025-10-29T01:28:59.170823744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 29 01:28:59.171197 kubelet[2259]: E1029 01:28:59.170981 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 01:28:59.171197 kubelet[2259]: E1029 01:28:59.171012 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 01:28:59.171197 kubelet[2259]: E1029 01:28:59.171099 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dmjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bf5bd85-l59k9_calico-system(d15c73c2-757b-4aa9-bf20-2183e910769f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:28:59.172473 kubelet[2259]: E1029 01:28:59.172449 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f"
Oct 29 01:29:00.490919 env[1344]: time="2025-10-29T01:29:00.490666547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 29 01:29:00.826969 env[1344]: time="2025-10-29T01:29:00.826861265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:00.827406 env[1344]: time="2025-10-29T01:29:00.827368979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 29 01:29:00.827589 kubelet[2259]: E1029 01:29:00.827561 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 01:29:00.827855 kubelet[2259]: E1029 01:29:00.827837 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 29 01:29:00.828052 kubelet[2259]: E1029 01:29:00.828015 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9wcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h4k8p_calico-system(5af81d8c-f0dd-4f37-b2dd-db8e64891fd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:00.829381 kubelet[2259]: E1029 01:29:00.829355 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3"
Oct 29 01:29:01.490890 env[1344]: time="2025-10-29T01:29:01.490857154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 01:29:01.907861 env[1344]: time="2025-10-29T01:29:01.907829838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:01.908531 env[1344]: time="2025-10-29T01:29:01.908240438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 01:29:01.908578 kubelet[2259]: E1029 01:29:01.908380 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 01:29:01.908578 kubelet[2259]: E1029 01:29:01.908428 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 01:29:01.908947 kubelet[2259]: E1029 01:29:01.908903 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:01.910875 env[1344]: time="2025-10-29T01:29:01.910852182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 29 01:29:02.260418 env[1344]: time="2025-10-29T01:29:02.260312419Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:02.260963 env[1344]: time="2025-10-29T01:29:02.260932331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 29 01:29:02.261471 kubelet[2259]: E1029 01:29:02.261073 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 01:29:02.261471 kubelet[2259]: E1029 01:29:02.261121 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 01:29:02.261471 kubelet[2259]: E1029 01:29:02.261244 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:02.262352 kubelet[2259]: E1029 01:29:02.262326 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3"
Oct 29 01:29:03.490518 env[1344]: time="2025-10-29T01:29:03.490347822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 01:29:03.811937 env[1344]: time="2025-10-29T01:29:03.811852193Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:03.812898 env[1344]: time="2025-10-29T01:29:03.812868811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 01:29:03.813316 kubelet[2259]: E1029 01:29:03.813174 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 01:29:03.813316 kubelet[2259]: E1029 01:29:03.813229 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 01:29:03.813718 kubelet[2259]: E1029 01:29:03.813668 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mzpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79569d88b4-p6h9g_calico-apiserver(7dd39e5b-dcdd-482f-ab49-9053a64b98c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:03.815084 kubelet[2259]: E1029 01:29:03.815047 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9"
Oct 29 01:29:04.495902 env[1344]: time="2025-10-29T01:29:04.495871899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 01:29:04.857530 env[1344]: time="2025-10-29T01:29:04.857232532Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:04.857824 env[1344]: time="2025-10-29T01:29:04.857744139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 01:29:04.858104 kubelet[2259]: E1029 01:29:04.857908 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 01:29:04.858104 kubelet[2259]: E1029 01:29:04.857958 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 01:29:04.858104 kubelet[2259]: E1029 01:29:04.858077 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwtb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79569d88b4-q8r9t_calico-apiserver(c80c9899-41ee-40c7-92c7-ab20d72dcefe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 01:29:04.859498 kubelet[2259]: E1029 01:29:04.859471 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:29:05.490730 env[1344]: time="2025-10-29T01:29:05.490428236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 01:29:05.810489 env[1344]: time="2025-10-29T01:29:05.810401342Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:29:05.810965 env[1344]: time="2025-10-29T01:29:05.810932403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 01:29:05.811107 kubelet[2259]: E1029 01:29:05.811078 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 01:29:05.811215 kubelet[2259]: E1029 
01:29:05.811199 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 01:29:05.811392 kubelet[2259]: E1029 01:29:05.811358 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b46bb89cf-x7bmp_calico-system(0c8f51f5-a169-488a-a224-1fc1684a62fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 01:29:05.812733 kubelet[2259]: E1029 01:29:05.812709 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb" Oct 29 01:29:07.574213 env[1344]: time="2025-10-29T01:29:07.573638204Z" level=info msg="StopPodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\"" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.597 [WARNING][4637] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a0f7d88c-b007-49f1-8b19-74afbf972b6c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585", Pod:"coredns-668d6bf9bc-tlq7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f27c879e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.597 [INFO][4637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.597 [INFO][4637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" iface="eth0" netns="" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.597 [INFO][4637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.597 [INFO][4637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.614 [INFO][4644] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.614 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.614 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.618 [WARNING][4644] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.618 [INFO][4644] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.624 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:07.626982 env[1344]: 2025-10-29 01:29:07.625 [INFO][4637] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.627727 env[1344]: time="2025-10-29T01:29:07.626999007Z" level=info msg="TearDown network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" successfully" Oct 29 01:29:07.627727 env[1344]: time="2025-10-29T01:29:07.627023152Z" level=info msg="StopPodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" returns successfully" Oct 29 01:29:07.640323 env[1344]: time="2025-10-29T01:29:07.640308641Z" level=info msg="RemovePodSandbox for \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\"" Oct 29 01:29:07.641045 env[1344]: time="2025-10-29T01:29:07.640467024Z" level=info msg="Forcibly stopping sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\"" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.660 [WARNING][4659] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a0f7d88c-b007-49f1-8b19-74afbf972b6c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a86717ae55d09a54187c30b2a4d7c8a89de192926395e2fcd6f4dd8791ad585", Pod:"coredns-668d6bf9bc-tlq7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01f27c879e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.660 [INFO][4659] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.660 [INFO][4659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" iface="eth0" netns="" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.660 [INFO][4659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.660 [INFO][4659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.677 [INFO][4666] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.677 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.677 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.681 [WARNING][4666] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.681 [INFO][4666] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" HandleID="k8s-pod-network.a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Workload="localhost-k8s-coredns--668d6bf9bc--tlq7c-eth0" Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.682 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:07.684794 env[1344]: 2025-10-29 01:29:07.683 [INFO][4659] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd" Oct 29 01:29:07.685216 env[1344]: time="2025-10-29T01:29:07.685177853Z" level=info msg="TearDown network for sandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" successfully" Oct 29 01:29:07.686746 env[1344]: time="2025-10-29T01:29:07.686733109Z" level=info msg="RemovePodSandbox \"a3bccb4383767c1535ad6c5527d11b0ad5fbc197c104bcb77f6e7590e8c0eacd\" returns successfully" Oct 29 01:29:07.687204 env[1344]: time="2025-10-29T01:29:07.687169578Z" level=info msg="StopPodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\"" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.705 [WARNING][4681] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0", GenerateName:"calico-kube-controllers-7b46bb89cf-", Namespace:"calico-system", SelfLink:"", UID:"0c8f51f5-a169-488a-a224-1fc1684a62fb", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b46bb89cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739", Pod:"calico-kube-controllers-7b46bb89cf-x7bmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6f0d03f484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.706 [INFO][4681] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.706 [INFO][4681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" iface="eth0" netns="" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.706 [INFO][4681] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.706 [INFO][4681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.718 [INFO][4688] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.718 [INFO][4688] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.718 [INFO][4688] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.722 [WARNING][4688] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.722 [INFO][4688] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.723 [INFO][4688] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:07.725327 env[1344]: 2025-10-29 01:29:07.724 [INFO][4681] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.725734 env[1344]: time="2025-10-29T01:29:07.725713460Z" level=info msg="TearDown network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" successfully" Oct 29 01:29:07.726211 env[1344]: time="2025-10-29T01:29:07.725964189Z" level=info msg="StopPodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" returns successfully" Oct 29 01:29:07.726293 env[1344]: time="2025-10-29T01:29:07.726275317Z" level=info msg="RemovePodSandbox for \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\"" Oct 29 01:29:07.726322 env[1344]: time="2025-10-29T01:29:07.726298374Z" level=info msg="Forcibly stopping sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\"" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.745 [WARNING][4703] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0", GenerateName:"calico-kube-controllers-7b46bb89cf-", Namespace:"calico-system", SelfLink:"", UID:"0c8f51f5-a169-488a-a224-1fc1684a62fb", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b46bb89cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd2d6e7ae4c3733763ad2a5e8af69bffa10a55366fd8dfcd8c0225e096e3e739", Pod:"calico-kube-controllers-7b46bb89cf-x7bmp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6f0d03f484", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.745 [INFO][4703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.745 [INFO][4703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" iface="eth0" netns="" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.745 [INFO][4703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.745 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.758 [INFO][4711] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.758 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.758 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.762 [WARNING][4711] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.762 [INFO][4711] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" HandleID="k8s-pod-network.a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Workload="localhost-k8s-calico--kube--controllers--7b46bb89cf--x7bmp-eth0" Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.763 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:07.765049 env[1344]: 2025-10-29 01:29:07.764 [INFO][4703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91" Oct 29 01:29:07.765901 env[1344]: time="2025-10-29T01:29:07.765054064Z" level=info msg="TearDown network for sandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" successfully" Oct 29 01:29:07.766424 env[1344]: time="2025-10-29T01:29:07.766406993Z" level=info msg="RemovePodSandbox \"a5e1537dc4ee2923171974c601e5d18f412a5db4e75059365e920b87d08dff91\" returns successfully" Oct 29 01:29:07.766703 env[1344]: time="2025-10-29T01:29:07.766681729Z" level=info msg="StopPodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\"" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.787 [WARNING][4726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c80c9899-41ee-40c7-92c7-ab20d72dcefe", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375", Pod:"calico-apiserver-79569d88b4-q8r9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261b1403950", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.787 [INFO][4726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.787 [INFO][4726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" iface="eth0" netns="" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.787 [INFO][4726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.787 [INFO][4726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.806 [INFO][4733] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.806 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.806 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.810 [WARNING][4733] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.810 [INFO][4733] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.810 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:07.813049 env[1344]: 2025-10-29 01:29:07.811 [INFO][4726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.817243 env[1344]: time="2025-10-29T01:29:07.813304566Z" level=info msg="TearDown network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" successfully" Oct 29 01:29:07.817243 env[1344]: time="2025-10-29T01:29:07.813323501Z" level=info msg="StopPodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" returns successfully" Oct 29 01:29:07.817243 env[1344]: time="2025-10-29T01:29:07.813597960Z" level=info msg="RemovePodSandbox for \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\"" Oct 29 01:29:07.817243 env[1344]: time="2025-10-29T01:29:07.813614748Z" level=info msg="Forcibly stopping sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\"" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.844 [WARNING][4747] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"c80c9899-41ee-40c7-92c7-ab20d72dcefe", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e54adea2d888fd3f30d6a5f2c15f1f8fa82de296197c212479cd5c501f0a375", Pod:"calico-apiserver-79569d88b4-q8r9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali261b1403950", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.845 [INFO][4747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.845 [INFO][4747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" iface="eth0" netns="" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.845 [INFO][4747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.845 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.857 [INFO][4755] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.857 [INFO][4755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.857 [INFO][4755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.861 [WARNING][4755] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.861 [INFO][4755] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" HandleID="k8s-pod-network.853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Workload="localhost-k8s-calico--apiserver--79569d88b4--q8r9t-eth0" Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.861 [INFO][4755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:07.864193 env[1344]: 2025-10-29 01:29:07.862 [INFO][4747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e" Oct 29 01:29:07.868943 env[1344]: time="2025-10-29T01:29:07.864160069Z" level=info msg="TearDown network for sandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" successfully" Oct 29 01:29:07.874775 env[1344]: time="2025-10-29T01:29:07.874759955Z" level=info msg="RemovePodSandbox \"853cbab672cc73cbd1c7268f5392c2db6a7e2d751e33edefaeb317d1466c5b2e\" returns successfully" Oct 29 01:29:07.875093 env[1344]: time="2025-10-29T01:29:07.875081493Z" level=info msg="StopPodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\"" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.894 [WARNING][4769] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6w9mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b41dfbb-cb8d-4095-9219-ece15b48c5c3", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d", Pod:"csi-node-driver-6w9mz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdaed86d59e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.895 [INFO][4769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.895 [INFO][4769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" iface="eth0" netns="" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.895 [INFO][4769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.895 [INFO][4769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.916 [INFO][4776] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.916 [INFO][4776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.916 [INFO][4776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.921 [WARNING][4776] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.921 [INFO][4776] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.922 [INFO][4776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:29:07.925263 env[1344]: 2025-10-29 01:29:07.923 [INFO][4769] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.943627 env[1344]: time="2025-10-29T01:29:07.925277278Z" level=info msg="TearDown network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" successfully" Oct 29 01:29:07.943627 env[1344]: time="2025-10-29T01:29:07.925296030Z" level=info msg="StopPodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" returns successfully" Oct 29 01:29:07.943627 env[1344]: time="2025-10-29T01:29:07.934056574Z" level=info msg="RemovePodSandbox for \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\"" Oct 29 01:29:07.943627 env[1344]: time="2025-10-29T01:29:07.934075333Z" level=info msg="Forcibly stopping sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\"" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.966 [WARNING][4791] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6w9mz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b41dfbb-cb8d-4095-9219-ece15b48c5c3", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cda72d8ed4e7493e200c1b278d6fd15adac0a7f735b110589c11e917d93dde3d", Pod:"csi-node-driver-6w9mz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdaed86d59e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.966 [INFO][4791] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.966 [INFO][4791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" iface="eth0" netns="" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.966 [INFO][4791] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.966 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.979 [INFO][4799] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.979 [INFO][4799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.980 [INFO][4799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.984 [WARNING][4799] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.984 [INFO][4799] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" HandleID="k8s-pod-network.7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Workload="localhost-k8s-csi--node--driver--6w9mz-eth0" Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.984 [INFO][4799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:29:07.988690 env[1344]: 2025-10-29 01:29:07.986 [INFO][4791] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03" Oct 29 01:29:07.994346 env[1344]: time="2025-10-29T01:29:07.989067274Z" level=info msg="TearDown network for sandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" successfully" Oct 29 01:29:07.999567 env[1344]: time="2025-10-29T01:29:07.999552466Z" level=info msg="RemovePodSandbox \"7b42392c3e1a8892f88e255c936c939e6c7bf0a8aa6d73ae86be23c467721e03\" returns successfully" Oct 29 01:29:07.999775 env[1344]: time="2025-10-29T01:29:07.999763326Z" level=info msg="StopPodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\"" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.018 [WARNING][4813] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--h4k8p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c", Pod:"goldmane-666569f655-h4k8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid48934885b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.018 [INFO][4813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.018 [INFO][4813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" iface="eth0" netns="" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.018 [INFO][4813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.018 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.032 [INFO][4820] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.032 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.032 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.035 [WARNING][4820] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.035 [INFO][4820] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.036 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:08.038639 env[1344]: 2025-10-29 01:29:08.037 [INFO][4813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.059928 env[1344]: time="2025-10-29T01:29:08.038651145Z" level=info msg="TearDown network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" successfully" Oct 29 01:29:08.059928 env[1344]: time="2025-10-29T01:29:08.038672915Z" level=info msg="StopPodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" returns successfully" Oct 29 01:29:08.059928 env[1344]: time="2025-10-29T01:29:08.038925678Z" level=info msg="RemovePodSandbox for \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\"" Oct 29 01:29:08.059928 env[1344]: time="2025-10-29T01:29:08.038940515Z" level=info msg="Forcibly stopping sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\"" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.063 [WARNING][4835] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't 
delete WEP. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--h4k8p-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5af81d8c-f0dd-4f37-b2dd-db8e64891fd3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a967cea1dee93e67f724313a0d91337cd1fbdb4adb05f94b404fd8dc0764f83c", Pod:"goldmane-666569f655-h4k8p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid48934885b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.063 [INFO][4835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.063 [INFO][4835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" iface="eth0" netns="" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.063 [INFO][4835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.063 [INFO][4835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.076 [INFO][4842] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.076 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.076 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.080 [WARNING][4842] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.080 [INFO][4842] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" HandleID="k8s-pod-network.e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Workload="localhost-k8s-goldmane--666569f655--h4k8p-eth0" Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.081 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:29:08.083723 env[1344]: 2025-10-29 01:29:08.082 [INFO][4835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb" Oct 29 01:29:08.084235 env[1344]: time="2025-10-29T01:29:08.084214102Z" level=info msg="TearDown network for sandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" successfully" Oct 29 01:29:08.098663 env[1344]: time="2025-10-29T01:29:08.098625321Z" level=info msg="RemovePodSandbox \"e06da40828e97ad77a9a3345000de382dcb1325707331b2dd9a825df9dc4a5eb\" returns successfully" Oct 29 01:29:08.099061 env[1344]: time="2025-10-29T01:29:08.099043754Z" level=info msg="StopPodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\"" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.120 [WARNING][4856] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" WorkloadEndpoint="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.120 [INFO][4856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.120 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" iface="eth0" netns="" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.120 [INFO][4856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.121 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.132 [INFO][4864] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.132 [INFO][4864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.132 [INFO][4864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.136 [WARNING][4864] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.136 [INFO][4864] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.136 [INFO][4864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:29:08.139039 env[1344]: 2025-10-29 01:29:08.137 [INFO][4856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.139407 env[1344]: time="2025-10-29T01:29:08.139384905Z" level=info msg="TearDown network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" successfully" Oct 29 01:29:08.139478 env[1344]: time="2025-10-29T01:29:08.139459663Z" level=info msg="StopPodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" returns successfully" Oct 29 01:29:08.140342 env[1344]: time="2025-10-29T01:29:08.140330022Z" level=info msg="RemovePodSandbox for \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\"" Oct 29 01:29:08.140436 env[1344]: time="2025-10-29T01:29:08.140412066Z" level=info msg="Forcibly stopping sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\"" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.159 [WARNING][4878] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" WorkloadEndpoint="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.159 [INFO][4878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.159 [INFO][4878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" iface="eth0" netns="" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.159 [INFO][4878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.159 [INFO][4878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.171 [INFO][4885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.171 [INFO][4885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.171 [INFO][4885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.175 [WARNING][4885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.175 [INFO][4885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" HandleID="k8s-pod-network.7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Workload="localhost-k8s-whisker--657794849--7mwqt-eth0" Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.176 [INFO][4885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:29:08.178501 env[1344]: 2025-10-29 01:29:08.177 [INFO][4878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325" Oct 29 01:29:08.178860 env[1344]: time="2025-10-29T01:29:08.178840471Z" level=info msg="TearDown network for sandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" successfully" Oct 29 01:29:08.188847 env[1344]: time="2025-10-29T01:29:08.188834467Z" level=info msg="RemovePodSandbox \"7aabbe45860a2ecfd505f0fb85250a46dfb4fe17fd4e862e14869bf553fa1325\" returns successfully" Oct 29 01:29:08.189325 env[1344]: time="2025-10-29T01:29:08.189312581Z" level=info msg="StopPodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\"" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.210 [WARNING][4900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df", Pod:"coredns-668d6bf9bc-h6hwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49133034a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.210 [INFO][4900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.210 [INFO][4900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" iface="eth0" netns="" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.210 [INFO][4900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.210 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.224 [INFO][4907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.224 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.224 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.228 [WARNING][4907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.228 [INFO][4907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.229 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 01:29:08.231165 env[1344]: 2025-10-29 01:29:08.230 [INFO][4900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.231930 env[1344]: time="2025-10-29T01:29:08.231555517Z" level=info msg="TearDown network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" successfully" Oct 29 01:29:08.231930 env[1344]: time="2025-10-29T01:29:08.231578260Z" level=info msg="StopPodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" returns successfully" Oct 29 01:29:08.232029 env[1344]: time="2025-10-29T01:29:08.232015440Z" level=info msg="RemovePodSandbox for \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\"" Oct 29 01:29:08.232097 env[1344]: time="2025-10-29T01:29:08.232073817Z" level=info msg="Forcibly stopping sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\"" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.251 [WARNING][4921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2fd41a65-6531-4e39-b3e6-b8b8fe6bf795", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad030092a25ed99e8a0e265858140f591ea046c6f074c42b2d14bc0eaf6047df", Pod:"coredns-668d6bf9bc-h6hwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49133034a20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.251 [INFO][4921] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.251 [INFO][4921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" iface="eth0" netns="" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.251 [INFO][4921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.251 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.265 [INFO][4928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.265 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.265 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.269 [WARNING][4928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.269 [INFO][4928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" HandleID="k8s-pod-network.5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Workload="localhost-k8s-coredns--668d6bf9bc--h6hwn-eth0" Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.269 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:08.271931 env[1344]: 2025-10-29 01:29:08.270 [INFO][4921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9" Oct 29 01:29:08.272287 env[1344]: time="2025-10-29T01:29:08.271946966Z" level=info msg="TearDown network for sandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" successfully" Oct 29 01:29:08.273402 env[1344]: time="2025-10-29T01:29:08.273386037Z" level=info msg="RemovePodSandbox \"5078cf5db7c6fcf114348afd5b40bee3e2913748cb30352026ca3581831b05a9\" returns successfully" Oct 29 01:29:08.273710 env[1344]: time="2025-10-29T01:29:08.273694718Z" level=info msg="StopPodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\"" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.293 [WARNING][4942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7dd39e5b-dcdd-482f-ab49-9053a64b98c9", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb", Pod:"calico-apiserver-79569d88b4-p6h9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d03d2d772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.293 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.293 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" iface="eth0" netns="" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.293 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.293 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.306 [INFO][4949] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.306 [INFO][4949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.306 [INFO][4949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.310 [WARNING][4949] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.310 [INFO][4949] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.311 [INFO][4949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:08.313632 env[1344]: 2025-10-29 01:29:08.312 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.313964 env[1344]: time="2025-10-29T01:29:08.313646470Z" level=info msg="TearDown network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" successfully" Oct 29 01:29:08.313964 env[1344]: time="2025-10-29T01:29:08.313667963Z" level=info msg="StopPodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" returns successfully" Oct 29 01:29:08.314263 env[1344]: time="2025-10-29T01:29:08.314249794Z" level=info msg="RemovePodSandbox for \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\"" Oct 29 01:29:08.314381 env[1344]: time="2025-10-29T01:29:08.314357842Z" level=info msg="Forcibly stopping sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\"" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.335 [WARNING][4963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0", GenerateName:"calico-apiserver-79569d88b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"7dd39e5b-dcdd-482f-ab49-9053a64b98c9", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 1, 28, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79569d88b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2620f1220843b2e58db91fff0f5d60b77b4c6f67a61ebb319fec0eab7f137bb", Pod:"calico-apiserver-79569d88b4-p6h9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96d03d2d772", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.335 [INFO][4963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.335 [INFO][4963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" iface="eth0" netns="" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.335 [INFO][4963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.335 [INFO][4963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.349 [INFO][4971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.349 [INFO][4971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.349 [INFO][4971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.352 [WARNING][4971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.352 [INFO][4971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" HandleID="k8s-pod-network.3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Workload="localhost-k8s-calico--apiserver--79569d88b4--p6h9g-eth0" Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.353 [INFO][4971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 01:29:08.355756 env[1344]: 2025-10-29 01:29:08.354 [INFO][4963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68" Oct 29 01:29:08.356094 env[1344]: time="2025-10-29T01:29:08.355769454Z" level=info msg="TearDown network for sandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" successfully" Oct 29 01:29:08.357208 env[1344]: time="2025-10-29T01:29:08.357180731Z" level=info msg="RemovePodSandbox \"3fbc9aee4c6d01189acde1e9b04926aec3023e3093d6348538b76d1cb923de68\" returns successfully" Oct 29 01:29:09.501926 kubelet[2259]: E1029 01:29:09.501879 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f" Oct 29 01:29:12.667815 systemd[1]: run-containerd-runc-k8s.io-66720a4fa800a80f57832ea2f4eca7a3b4fbacada8b8b5341a62852577409ba7-runc.E8EIhD.mount: Deactivated successfully. Oct 29 01:29:13.491351 kubelet[2259]: E1029 01:29:13.491322 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:29:14.490638 kubelet[2259]: E1029 01:29:14.490614 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:29:15.493161 kubelet[2259]: E1029 01:29:15.493125 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9" Oct 29 01:29:16.528921 kubelet[2259]: E1029 01:29:16.528896 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb" Oct 29 01:29:17.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.110:22-139.178.68.195:49806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:29:17.428323 kernel: kauditd_printk_skb: 44 callbacks suppressed Oct 29 01:29:17.430275 kernel: audit: type=1130 audit(1761701357.419:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.110:22-139.178.68.195:49806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:17.420577 systemd[1]: Started sshd@7-139.178.70.110:22-139.178.68.195:49806.service. Oct 29 01:29:17.542000 audit[5009]: USER_ACCT pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:17.555717 kernel: audit: type=1101 audit(1761701357.542:418): pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:17.555753 kernel: audit: type=1103 audit(1761701357.546:419): pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:17.555770 kernel: audit: type=1006 audit(1761701357.546:420): pid=5009 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 29 01:29:17.555790 kernel: audit: type=1300 audit(1761701357.546:420): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffedf7c4d0 a2=3 a3=0 items=0 ppid=1 pid=5009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:17.555810 
kernel: audit: type=1327 audit(1761701357.546:420): proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:17.546000 audit[5009]: CRED_ACQ pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:17.546000 audit[5009]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffedf7c4d0 a2=3 a3=0 items=0 ppid=1 pid=5009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:17.546000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:17.559648 sshd[5009]: Accepted publickey for core from 139.178.68.195 port 49806 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:29:17.557125 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:29:17.577834 systemd[1]: Started session-10.scope. Oct 29 01:29:17.578626 systemd-logind[1329]: New session 10 of user core. 
Oct 29 01:29:17.581000 audit[5009]: USER_START pid=5009 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:17.582000 audit[5012]: CRED_ACQ pid=5012 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:17.589245 kernel: audit: type=1105 audit(1761701357.581:421): pid=5009 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:17.589292 kernel: audit: type=1103 audit(1761701357.582:422): pid=5012 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:18.299056 sshd[5009]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:18.298000 audit[5009]: USER_END pid=5009 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:18.307048 kernel: audit: type=1106 audit(1761701358.298:423): pid=5009 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:18.307566 kernel: audit: type=1104 audit(1761701358.298:424): pid=5009 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:18.298000 audit[5009]: CRED_DISP pid=5009 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:18.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.110:22-139.178.68.195:49806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:18.301080 systemd-logind[1329]: Session 10 logged out. Waiting for processes to exit.
Oct 29 01:29:18.301959 systemd[1]: sshd@7-139.178.70.110:22-139.178.68.195:49806.service: Deactivated successfully.
Oct 29 01:29:18.302497 systemd[1]: session-10.scope: Deactivated successfully.
Oct 29 01:29:18.303225 systemd-logind[1329]: Removed session 10.
Oct 29 01:29:18.583027 kubelet[2259]: E1029 01:29:18.582955    2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe"
Oct 29 01:29:20.496616 env[1344]: time="2025-10-29T01:29:20.496580704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 29 01:29:20.803088 env[1344]: time="2025-10-29T01:29:20.802983123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:20.803469 env[1344]: time="2025-10-29T01:29:20.803439201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 29 01:29:20.813391 kubelet[2259]: E1029 01:29:20.813355    2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 01:29:20.815691 kubelet[2259]: E1029 01:29:20.815670    2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 29 01:29:20.825637 kubelet[2259]: E1029 01:29:20.825605    2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5c343149867d44f28e7fda414822142b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5dmjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bf5bd85-l59k9_calico-system(d15c73c2-757b-4aa9-bf20-2183e910769f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:20.828104 env[1344]: time="2025-10-29T01:29:20.827762263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 29 01:29:21.154730 env[1344]: time="2025-10-29T01:29:21.154684350Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:21.155062 env[1344]: time="2025-10-29T01:29:21.155033660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 29 01:29:21.155224 kubelet[2259]: E1029 01:29:21.155202    2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 01:29:21.155301 kubelet[2259]: E1029 01:29:21.155288    2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 29 01:29:21.155425 kubelet[2259]: E1029 01:29:21.155403    2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dmjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55bf5bd85-l59k9_calico-system(d15c73c2-757b-4aa9-bf20-2183e910769f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:21.156712 kubelet[2259]: E1029 01:29:21.156686    2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f"
Oct 29 01:29:23.302948 systemd[1]: Started sshd@8-139.178.70.110:22-139.178.68.195:36038.service.
Oct 29 01:29:23.308751 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 29 01:29:23.308811 kernel: audit: type=1130 audit(1761701363.301:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.110:22-139.178.68.195:36038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:23.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.110:22-139.178.68.195:36038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:23.364000 audit[5024]: USER_ACCT pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.370444 sshd[5024]: Accepted publickey for core from 139.178.68.195 port 36038 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:23.369000 audit[5024]: CRED_ACQ pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.374779 kernel: audit: type=1101 audit(1761701363.364:427): pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.374831 kernel: audit: type=1103 audit(1761701363.369:428): pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.374856 kernel: audit: type=1006 audit(1761701363.369:429): pid=5024 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Oct 29 01:29:23.369000 audit[5024]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2cc69ba0 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:23.380827 kernel: audit: type=1300 audit(1761701363.369:429): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2cc69ba0 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:23.369000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:23.382150 kernel: audit: type=1327 audit(1761701363.369:429): proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:23.382287 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:23.386932 systemd[1]: Started session-11.scope.
Oct 29 01:29:23.387242 systemd-logind[1329]: New session 11 of user core.
Oct 29 01:29:23.395000 audit[5024]: USER_START pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.399000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.403857 kernel: audit: type=1105 audit(1761701363.395:430): pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.403900 kernel: audit: type=1103 audit(1761701363.399:431): pid=5027 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.596511 sshd[5024]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:23.596000 audit[5024]: USER_END pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.601199 kernel: audit: type=1106 audit(1761701363.596:432): pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.599000 audit[5024]: CRED_DISP pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.605790 kernel: audit: type=1104 audit(1761701363.599:433): pid=5024 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:23.605255 systemd[1]: sshd@8-139.178.70.110:22-139.178.68.195:36038.service: Deactivated successfully.
Oct 29 01:29:23.606003 systemd[1]: session-11.scope: Deactivated successfully.
Oct 29 01:29:23.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.110:22-139.178.68.195:36038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:23.606193 systemd-logind[1329]: Session 11 logged out. Waiting for processes to exit.
Oct 29 01:29:23.606938 systemd-logind[1329]: Removed session 11.
Oct 29 01:29:27.492782 env[1344]: time="2025-10-29T01:29:27.492757210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 01:29:27.837301 env[1344]: time="2025-10-29T01:29:27.837209246Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:27.837826 env[1344]: time="2025-10-29T01:29:27.837796122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 01:29:27.838082 kubelet[2259]: E1029 01:29:27.838032    2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 01:29:27.838357 kubelet[2259]: E1029 01:29:27.838090    2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 01:29:27.838357 kubelet[2259]: E1029 01:29:27.838223    2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:27.838935 env[1344]: time="2025-10-29T01:29:27.838918120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 01:29:28.145278 env[1344]: time="2025-10-29T01:29:28.145222517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:28.145680 env[1344]: time="2025-10-29T01:29:28.145646529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 01:29:28.145840 kubelet[2259]: E1029 01:29:28.145813    2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 01:29:28.146625 kubelet[2259]: E1029 01:29:28.145847    2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 01:29:28.146625 kubelet[2259]: E1029 01:29:28.146021    2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mzpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79569d88b4-p6h9g_calico-apiserver(7dd39e5b-dcdd-482f-ab49-9053a64b98c9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:28.147266 env[1344]: time="2025-10-29T01:29:28.146283621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 29 01:29:28.147386 kubelet[2259]: E1029 01:29:28.147265    2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9"
Oct 29 01:29:28.488285 env[1344]: time="2025-10-29T01:29:28.488198895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:29:28.489263 env[1344]: time="2025-10-29T01:29:28.488565992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 29 01:29:28.489515 kubelet[2259]: E1029 01:29:28.488722    2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 01:29:28.489515 kubelet[2259]: E1029 01:29:28.488770    2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 01:29:28.489515 kubelet[2259]: E1029 01:29:28.488895    2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:29:28.489962 env[1344]: time="2025-10-29T01:29:28.489949316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 29 01:29:28.490505 kubelet[2259]: E1029 01:29:28.489966    2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3"
Oct 29 01:29:28.601998 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 29 01:29:28.602931 kernel: audit: type=1130 audit(1761701368.597:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.110:22-139.178.68.195:36054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:28.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.110:22-139.178.68.195:36054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:28.598234 systemd[1]: Started sshd@9-139.178.70.110:22-139.178.68.195:36054.service.
Oct 29 01:29:28.673000 audit[5043]: USER_ACCT pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:28.679243 kernel: audit: type=1101 audit(1761701368.673:436): pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:28.679589 sshd[5043]: Accepted publickey for core from 139.178.68.195 port 36054 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:28.680398 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:28.678000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:28.685725 kernel: audit: type=1103 audit(1761701368.678:437): pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:28.685770 kernel: audit: type=1006 audit(1761701368.678:438): pid=5043 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Oct 29 01:29:28.678000 audit[5043]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe83c1d820 a2=3 a3=0 items=0 ppid=1 pid=5043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:28.689194 kernel: audit: type=1300 audit(1761701368.678:438): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe83c1d820 a2=3 a3=0 items=0 ppid=1 pid=5043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:28.678000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:28.690506 kernel: audit: type=1327 audit(1761701368.678:438): proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:28.692775 systemd[1]: Started session-12.scope.
Oct 29 01:29:28.693623 systemd-logind[1329]: New session 12 of user core.
Oct 29 01:29:28.695000 audit[5043]: USER_START pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.699000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.704204 kernel: audit: type=1105 audit(1761701368.695:439): pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.704244 kernel: audit: type=1103 audit(1761701368.699:440): pid=5046 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.835563 env[1344]: time="2025-10-29T01:29:28.835276432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:29:28.836022 env[1344]: time="2025-10-29T01:29:28.835944378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 01:29:28.836536 kubelet[2259]: E1029 01:29:28.836130 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 01:29:28.836536 kubelet[2259]: E1029 01:29:28.836172 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 01:29:28.836536 kubelet[2259]: E1029 01:29:28.836283 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9wcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h4k8p_calico-system(5af81d8c-f0dd-4f37-b2dd-db8e64891fd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 01:29:28.837636 kubelet[2259]: E1029 01:29:28.837614 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 01:29:28.851760 systemd[1]: Started sshd@10-139.178.70.110:22-139.178.68.195:36060.service. Oct 29 01:29:28.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.110:22-139.178.68.195:36060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:28.856153 sshd[5043]: pam_unix(sshd:session): session closed for user core Oct 29 01:29:28.856263 kernel: audit: type=1130 audit(1761701368.850:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.110:22-139.178.68.195:36060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:28.858000 audit[5043]: USER_END pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.860664 systemd[1]: sshd@9-139.178.70.110:22-139.178.68.195:36054.service: Deactivated successfully. Oct 29 01:29:28.861542 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 29 01:29:28.864825 kernel: audit: type=1106 audit(1761701368.858:442): pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.861791 systemd-logind[1329]: Session 12 logged out. Waiting for processes to exit. Oct 29 01:29:28.862489 systemd-logind[1329]: Removed session 12. Oct 29 01:29:28.858000 audit[5043]: CRED_DISP pid=5043 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:28.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.110:22-139.178.68.195:36054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:29:29.021382 sshd[5055]: Accepted publickey for core from 139.178.68.195 port 36060 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:29:29.020000 audit[5055]: USER_ACCT pid=5055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.021000 audit[5055]: CRED_ACQ pid=5055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.021000 audit[5055]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbc2ca1e0 a2=3 a3=0 items=0 ppid=1 pid=5055 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:29.021000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:29.035009 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:29:29.057510 systemd-logind[1329]: New session 13 of user core. Oct 29 01:29:29.058013 systemd[1]: Started session-13.scope. 
Oct 29 01:29:29.059000 audit[5055]: USER_START pid=5055 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.060000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.492233 env[1344]: time="2025-10-29T01:29:29.492019591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 01:29:29.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.110:22-139.178.68.195:36072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:29.547901 systemd[1]: Started sshd@11-139.178.70.110:22-139.178.68.195:36072.service. 
Oct 29 01:29:29.549499 sshd[5055]: pam_unix(sshd:session): session closed for user core Oct 29 01:29:29.549000 audit[5055]: USER_END pid=5055 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.549000 audit[5055]: CRED_DISP pid=5055 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.110:22-139.178.68.195:36060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:29.551685 systemd[1]: sshd@10-139.178.70.110:22-139.178.68.195:36060.service: Deactivated successfully. Oct 29 01:29:29.552967 systemd-logind[1329]: Session 13 logged out. Waiting for processes to exit. Oct 29 01:29:29.553334 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 01:29:29.554008 systemd-logind[1329]: Removed session 13. 
Oct 29 01:29:29.588000 audit[5065]: USER_ACCT pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.589809 sshd[5065]: Accepted publickey for core from 139.178.68.195 port 36072 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:29:29.589000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.589000 audit[5065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcff1c2890 a2=3 a3=0 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:29.589000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:29.591251 sshd[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:29:29.594536 systemd[1]: Started session-14.scope. Oct 29 01:29:29.594769 systemd-logind[1329]: New session 14 of user core. 
Oct 29 01:29:29.597000 audit[5065]: USER_START pid=5065 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.598000 audit[5070]: CRED_ACQ pid=5070 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.696667 sshd[5065]: pam_unix(sshd:session): session closed for user core Oct 29 01:29:29.696000 audit[5065]: USER_END pid=5065 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.696000 audit[5065]: CRED_DISP pid=5065 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:29.698289 systemd[1]: sshd@11-139.178.70.110:22-139.178.68.195:36072.service: Deactivated successfully. Oct 29 01:29:29.698795 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 01:29:29.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.110:22-139.178.68.195:36072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:29.699424 systemd-logind[1329]: Session 14 logged out. Waiting for processes to exit. Oct 29 01:29:29.700041 systemd-logind[1329]: Removed session 14. 
Oct 29 01:29:29.854879 env[1344]: time="2025-10-29T01:29:29.854692222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:29:29.855344 env[1344]: time="2025-10-29T01:29:29.855254718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 01:29:29.855883 kubelet[2259]: E1029 01:29:29.855533 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 01:29:29.855883 kubelet[2259]: E1029 01:29:29.855563 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 01:29:29.855883 kubelet[2259]: E1029 01:29:29.855644 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b46bb89cf-x7bmp_calico-system(0c8f51f5-a169-488a-a224-1fc1684a62fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 01:29:29.856788 kubelet[2259]: E1029 01:29:29.856765 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb" Oct 29 01:29:30.490900 env[1344]: time="2025-10-29T01:29:30.490870037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 01:29:30.811503 env[1344]: 
time="2025-10-29T01:29:30.811409505Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 01:29:30.816785 env[1344]: time="2025-10-29T01:29:30.816759219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 01:29:30.816946 kubelet[2259]: E1029 01:29:30.816916 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 01:29:30.817006 kubelet[2259]: E1029 01:29:30.816962 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 01:29:30.817083 kubelet[2259]: E1029 01:29:30.817054 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwtb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79569d88b4-q8r9t_calico-apiserver(c80c9899-41ee-40c7-92c7-ab20d72dcefe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 01:29:30.818290 kubelet[2259]: E1029 01:29:30.818273 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:29:32.490618 kubelet[2259]: E1029 01:29:32.490572 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f" Oct 29 01:29:34.699554 systemd[1]: Started sshd@12-139.178.70.110:22-139.178.68.195:45990.service. 
Oct 29 01:29:34.701045 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 29 01:29:34.701119 kernel: audit: type=1130 audit(1761701374.698:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.110:22-139.178.68.195:45990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:34.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.110:22-139.178.68.195:45990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:34.797000 audit[5086]: USER_ACCT pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:34.799371 sshd[5086]: Accepted publickey for core from 139.178.68.195 port 45990 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:29:34.802242 kernel: audit: type=1101 audit(1761701374.797:463): pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:34.801000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:34.807247 kernel: audit: type=1103 audit(1761701374.801:464): pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 
addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:34.807281 kernel: audit: type=1006 audit(1761701374.801:465): pid=5086 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Oct 29 01:29:34.801000 audit[5086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbb9472d0 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:34.810571 kernel: audit: type=1300 audit(1761701374.801:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbb9472d0 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:34.810607 kernel: audit: type=1327 audit(1761701374.801:465): proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:34.801000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:34.811754 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:29:34.815974 systemd[1]: Started session-15.scope. Oct 29 01:29:34.816688 systemd-logind[1329]: New session 15 of user core. 
Oct 29 01:29:34.818000 audit[5086]: USER_START pid=5086 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.827260 kernel: audit: type=1105 audit(1761701374.818:466): pid=5086 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.827348 kernel: audit: type=1103 audit(1761701374.819:467): pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.819000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.958209 sshd[5086]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:34.959000 audit[5086]: USER_END pid=5086 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.967577 kernel: audit: type=1106 audit(1761701374.959:468): pid=5086 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.967606 kernel: audit: type=1104 audit(1761701374.959:469): pid=5086 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.959000 audit[5086]: CRED_DISP pid=5086 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:34.973456 systemd-logind[1329]: Session 15 logged out. Waiting for processes to exit.
Oct 29 01:29:34.974240 systemd[1]: sshd@12-139.178.70.110:22-139.178.68.195:45990.service: Deactivated successfully.
Oct 29 01:29:34.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.110:22-139.178.68.195:45990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:34.974722 systemd[1]: session-15.scope: Deactivated successfully.
Oct 29 01:29:34.975757 systemd-logind[1329]: Removed session 15.
Oct 29 01:29:39.960835 systemd[1]: Started sshd@13-139.178.70.110:22-139.178.68.195:45996.service.
Oct 29 01:29:39.966098 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 29 01:29:39.966144 kernel: audit: type=1130 audit(1761701379.959:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.110:22-139.178.68.195:45996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:39.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.110:22-139.178.68.195:45996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:40.015000 audit[5099]: USER_ACCT pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.017038 sshd[5099]: Accepted publickey for core from 139.178.68.195 port 45996 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:40.020223 kernel: audit: type=1101 audit(1761701380.015:472): pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.019000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.030213 kernel: audit: type=1103 audit(1761701380.019:473): pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.030272 kernel: audit: type=1006 audit(1761701380.019:474): pid=5099 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Oct 29 01:29:40.030291 kernel: audit: type=1300 audit(1761701380.019:474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6311190 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:40.030305 kernel: audit: type=1327 audit(1761701380.019:474): proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:40.019000 audit[5099]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6311190 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:40.019000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:40.030591 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:40.042740 systemd[1]: Started session-16.scope.
Oct 29 01:29:40.042861 systemd-logind[1329]: New session 16 of user core.
Oct 29 01:29:40.047000 audit[5099]: USER_START pid=5099 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.053225 kernel: audit: type=1105 audit(1761701380.047:475): pid=5099 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.051000 audit[5102]: CRED_ACQ pid=5102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.058225 kernel: audit: type=1103 audit(1761701380.051:476): pid=5102 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.205299 sshd[5099]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:40.204000 audit[5099]: USER_END pid=5099 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.212878 kernel: audit: type=1106 audit(1761701380.204:477): pid=5099 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.212923 kernel: audit: type=1104 audit(1761701380.208:478): pid=5099 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.208000 audit[5099]: CRED_DISP pid=5099 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:40.214589 systemd[1]: sshd@13-139.178.70.110:22-139.178.68.195:45996.service: Deactivated successfully.
Oct 29 01:29:40.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.110:22-139.178.68.195:45996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:40.215088 systemd[1]: session-16.scope: Deactivated successfully.
Oct 29 01:29:40.215284 systemd-logind[1329]: Session 16 logged out. Waiting for processes to exit.
Oct 29 01:29:40.215700 systemd-logind[1329]: Removed session 16.
Oct 29 01:29:41.490348 kubelet[2259]: E1029 01:29:41.490323 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3"
Oct 29 01:29:41.490763 kubelet[2259]: E1029 01:29:41.490745 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe"
Oct 29 01:29:42.490600 kubelet[2259]: E1029 01:29:42.490571 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9"
Oct 29 01:29:43.492024 kubelet[2259]: E1029 01:29:43.491992 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f"
Oct 29 01:29:43.496256 kubelet[2259]: E1029 01:29:43.496225 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3"
Oct 29 01:29:44.490420 kubelet[2259]: E1029 01:29:44.490388 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb"
Oct 29 01:29:45.206792 systemd[1]: Started sshd@14-139.178.70.110:22-139.178.68.195:35052.service.
Oct 29 01:29:45.211603 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 29 01:29:45.211650 kernel: audit: type=1130 audit(1761701385.205:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.110:22-139.178.68.195:35052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:45.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.110:22-139.178.68.195:35052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:45.288000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.290324 sshd[5134]: Accepted publickey for core from 139.178.68.195 port 35052 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:45.293479 kernel: audit: type=1101 audit(1761701385.288:481): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.292000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.298843 kernel: audit: type=1103 audit(1761701385.292:482): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.298876 kernel: audit: type=1006 audit(1761701385.292:483): pid=5134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Oct 29 01:29:45.298896 kernel: audit: type=1300 audit(1761701385.292:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3f6fa6e0 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:45.292000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3f6fa6e0 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:45.302395 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:45.292000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:45.305210 kernel: audit: type=1327 audit(1761701385.292:483): proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:45.309036 systemd[1]: Started session-17.scope.
Oct 29 01:29:45.309237 systemd-logind[1329]: New session 17 of user core.
Oct 29 01:29:45.311000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.312000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.319154 kernel: audit: type=1105 audit(1761701385.311:484): pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.319232 kernel: audit: type=1103 audit(1761701385.312:485): pid=5137 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.593308 sshd[5134]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:45.594000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.599283 kernel: audit: type=1106 audit(1761701385.594:486): pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.599327 kernel: audit: type=1104 audit(1761701385.594:487): pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.594000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:45.608490 systemd[1]: sshd@14-139.178.70.110:22-139.178.68.195:35052.service: Deactivated successfully.
Oct 29 01:29:45.608972 systemd[1]: session-17.scope: Deactivated successfully.
Oct 29 01:29:45.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.110:22-139.178.68.195:35052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:45.609374 systemd-logind[1329]: Session 17 logged out. Waiting for processes to exit.
Oct 29 01:29:45.610040 systemd-logind[1329]: Removed session 17.
Oct 29 01:29:50.594516 systemd[1]: Started sshd@15-139.178.70.110:22-139.178.68.195:35058.service.
Oct 29 01:29:50.600238 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 29 01:29:50.600288 kernel: audit: type=1130 audit(1761701390.594:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.110:22-139.178.68.195:35058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:50.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.110:22-139.178.68.195:35058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:50.643000 audit[5148]: USER_ACCT pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.648450 sshd[5148]: Accepted publickey for core from 139.178.68.195 port 35058 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:50.649196 kernel: audit: type=1101 audit(1761701390.643:490): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.655731 kernel: audit: type=1103 audit(1761701390.647:491): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.655777 kernel: audit: type=1006 audit(1761701390.647:492): pid=5148 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
Oct 29 01:29:50.656919 kernel: audit: type=1300 audit(1761701390.647:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc13cb760 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:50.647000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.647000 audit[5148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc13cb760 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:50.657262 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:50.647000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:50.658945 kernel: audit: type=1327 audit(1761701390.647:492): proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:50.661527 systemd[1]: Started session-18.scope.
Oct 29 01:29:50.661811 systemd-logind[1329]: New session 18 of user core.
Oct 29 01:29:50.663000 audit[5148]: USER_START pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.663000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.671563 kernel: audit: type=1105 audit(1761701390.663:493): pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.671962 kernel: audit: type=1103 audit(1761701390.663:494): pid=5151 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.840151 systemd[1]: Started sshd@16-139.178.70.110:22-139.178.68.195:35074.service.
Oct 29 01:29:50.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.110:22-139.178.68.195:35074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:50.845614 kernel: audit: type=1130 audit(1761701390.840:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.110:22-139.178.68.195:35074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:50.846374 sshd[5148]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:50.846000 audit[5148]: USER_END pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.849547 systemd[1]: sshd@15-139.178.70.110:22-139.178.68.195:35058.service: Deactivated successfully.
Oct 29 01:29:50.850019 systemd[1]: session-18.scope: Deactivated successfully.
Oct 29 01:29:50.853413 kernel: audit: type=1106 audit(1761701390.846:496): pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.847000 audit[5148]: CRED_DISP pid=5148 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.110:22-139.178.68.195:35058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:50.853235 systemd-logind[1329]: Session 18 logged out. Waiting for processes to exit.
Oct 29 01:29:50.854023 systemd-logind[1329]: Removed session 18.
Oct 29 01:29:50.889000 audit[5158]: USER_ACCT pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.891014 sshd[5158]: Accepted publickey for core from 139.178.68.195 port 35074 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:50.890000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.890000 audit[5158]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffccb76db40 a2=3 a3=0 items=0 ppid=1 pid=5158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:50.890000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:50.892372 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:50.895886 systemd-logind[1329]: New session 19 of user core.
Oct 29 01:29:50.896244 systemd[1]: Started session-19.scope.
Oct 29 01:29:50.898000 audit[5158]: USER_START pid=5158 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:50.900000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:51.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.110:22-139.178.68.195:35084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:51.344545 systemd[1]: Started sshd@17-139.178.70.110:22-139.178.68.195:35084.service.
Oct 29 01:29:51.348357 sshd[5158]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:51.353000 audit[5158]: USER_END pid=5158 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:51.353000 audit[5158]: CRED_DISP pid=5158 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:51.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.110:22-139.178.68.195:35074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:51.363860 systemd[1]: sshd@16-139.178.70.110:22-139.178.68.195:35074.service: Deactivated successfully.
Oct 29 01:29:51.365135 systemd[1]: session-19.scope: Deactivated successfully.
Oct 29 01:29:51.365358 systemd-logind[1329]: Session 19 logged out. Waiting for processes to exit.
Oct 29 01:29:51.366269 systemd-logind[1329]: Removed session 19.
Oct 29 01:29:51.405000 audit[5169]: USER_ACCT pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:51.406758 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 35084 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:51.406000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:51.406000 audit[5169]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebf745430 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:51.406000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:51.407641 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:51.410675 systemd[1]: Started session-20.scope.
Oct 29 01:29:51.410789 systemd-logind[1329]: New session 20 of user core.
Oct 29 01:29:51.412000 audit[5169]: USER_START pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:51.413000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.110:22-139.178.68.195:35088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:52.134528 systemd[1]: Started sshd@18-139.178.70.110:22-139.178.68.195:35088.service.
Oct 29 01:29:52.146900 sshd[5169]: pam_unix(sshd:session): session closed for user core
Oct 29 01:29:52.157000 audit[5169]: USER_END pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.163000 audit[5169]: CRED_DISP pid=5169 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.110:22-139.178.68.195:35084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:29:52.165809 systemd[1]: sshd@17-139.178.70.110:22-139.178.68.195:35084.service: Deactivated successfully.
Oct 29 01:29:52.168236 systemd[1]: session-20.scope: Deactivated successfully.
Oct 29 01:29:52.168469 systemd-logind[1329]: Session 20 logged out. Waiting for processes to exit.
Oct 29 01:29:52.171486 systemd-logind[1329]: Removed session 20.
Oct 29 01:29:52.230000 audit[5185]: USER_ACCT pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.232061 sshd[5185]: Accepted publickey for core from 139.178.68.195 port 35088 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:29:52.231000 audit[5185]: CRED_ACQ pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.231000 audit[5185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7eb45970 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:52.231000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:29:52.234342 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:29:52.239084 systemd[1]: Started session-21.scope.
Oct 29 01:29:52.243749 systemd-logind[1329]: New session 21 of user core.
Oct 29 01:29:52.247000 audit[5185]: USER_START pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.248000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:29:52.268000 audit[5192]: NETFILTER_CFG table=filter:128 family=2 entries=26 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 29 01:29:52.268000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe27d63f10 a2=0 a3=7ffe27d63efc items=0 ppid=2360 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:52.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 29 01:29:52.273000 audit[5192]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 29 01:29:52.273000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe27d63f10 a2=0 a3=0 items=0 ppid=2360 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:29:52.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 29
01:29:52.288000 audit[5194]: NETFILTER_CFG table=filter:130 family=2 entries=38 op=nft_register_rule pid=5194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:29:52.288000 audit[5194]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd667541d0 a2=0 a3=7ffd667541bc items=0 ppid=2360 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:52.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:29:52.293000 audit[5194]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:29:52.293000 audit[5194]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd667541d0 a2=0 a3=0 items=0 ppid=2360 pid=5194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:52.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:29:52.773197 kubelet[2259]: E1029 01:29:52.772865 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3" Oct 29 
01:29:52.968884 systemd[1]: Started sshd@19-139.178.70.110:22-139.178.68.195:56024.service. Oct 29 01:29:52.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.110:22-139.178.68.195:56024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:52.980486 sshd[5185]: pam_unix(sshd:session): session closed for user core Oct 29 01:29:52.984000 audit[5185]: USER_END pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:52.985000 audit[5185]: CRED_DISP pid=5185 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:52.990905 systemd[1]: sshd@18-139.178.70.110:22-139.178.68.195:35088.service: Deactivated successfully. Oct 29 01:29:52.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.110:22-139.178.68.195:35088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:52.991699 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 01:29:52.991930 systemd-logind[1329]: Session 21 logged out. Waiting for processes to exit. Oct 29 01:29:52.992769 systemd-logind[1329]: Removed session 21. 
Oct 29 01:29:53.045294 sshd[5200]: Accepted publickey for core from 139.178.68.195 port 56024 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:29:53.044000 audit[5200]: USER_ACCT pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:53.046169 sshd[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:29:53.044000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:53.044000 audit[5200]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff003fcec0 a2=3 a3=0 items=0 ppid=1 pid=5200 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:53.044000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 01:29:53.050402 systemd[1]: Started session-22.scope. Oct 29 01:29:53.050525 systemd-logind[1329]: New session 22 of user core. 
Oct 29 01:29:53.052000 audit[5200]: USER_START pid=5200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:53.053000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:53.300969 sshd[5200]: pam_unix(sshd:session): session closed for user core Oct 29 01:29:53.300000 audit[5200]: USER_END pid=5200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:53.300000 audit[5200]: CRED_DISP pid=5200 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:53.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.110:22-139.178.68.195:56024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:53.302825 systemd[1]: sshd@19-139.178.70.110:22-139.178.68.195:56024.service: Deactivated successfully. Oct 29 01:29:53.303671 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 01:29:53.303869 systemd-logind[1329]: Session 22 logged out. Waiting for processes to exit. Oct 29 01:29:53.304489 systemd-logind[1329]: Removed session 22. 
Oct 29 01:29:54.490008 kubelet[2259]: E1029 01:29:54.489978 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9" Oct 29 01:29:55.491799 kubelet[2259]: E1029 01:29:55.491770 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6w9mz" podUID="1b41dfbb-cb8d-4095-9219-ece15b48c5c3" Oct 29 01:29:55.492601 kubelet[2259]: E1029 01:29:55.491786 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe" Oct 29 01:29:56.490574 kubelet[2259]: E1029 01:29:56.490534 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb" Oct 29 01:29:57.360449 kernel: kauditd_printk_skb: 57 callbacks suppressed Oct 29 01:29:57.364782 kernel: audit: type=1325 audit(1761701397.356:538): table=filter:132 family=2 entries=26 op=nft_register_rule pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:29:57.368206 kernel: audit: type=1300 audit(1761701397.356:538): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6e8c8330 a2=0 a3=7ffd6e8c831c items=0 ppid=2360 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:57.369650 kernel: audit: type=1327 audit(1761701397.356:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:29:57.356000 audit[5216]: NETFILTER_CFG table=filter:132 family=2 entries=26 op=nft_register_rule pid=5216 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:29:57.356000 audit[5216]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6e8c8330 a2=0 a3=7ffd6e8c831c items=0 ppid=2360 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:57.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:29:57.368000 audit[5216]: NETFILTER_CFG table=nat:133 family=2 entries=104 op=nft_register_chain pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:29:57.375869 kernel: audit: type=1325 audit(1761701397.368:539): table=nat:133 family=2 entries=104 op=nft_register_chain pid=5216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 01:29:57.375906 kernel: audit: type=1300 audit(1761701397.368:539): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6e8c8330 a2=0 a3=7ffd6e8c831c items=0 ppid=2360 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:57.368000 audit[5216]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6e8c8330 a2=0 a3=7ffd6e8c831c items=0 ppid=2360 pid=5216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:57.377843 kernel: audit: type=1327 audit(1761701397.368:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:29:57.368000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 01:29:57.500170 kubelet[2259]: E1029 01:29:57.500146 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55bf5bd85-l59k9" podUID="d15c73c2-757b-4aa9-bf20-2183e910769f" Oct 29 01:29:58.308886 kernel: audit: type=1130 audit(1761701398.302:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.110:22-139.178.68.195:56036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:58.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.110:22-139.178.68.195:56036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:29:58.303910 systemd[1]: Started sshd@20-139.178.70.110:22-139.178.68.195:56036.service. 
Oct 29 01:29:58.383000 audit[5217]: USER_ACCT pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.385844 sshd[5217]: Accepted publickey for core from 139.178.68.195 port 56036 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:29:58.388203 kernel: audit: type=1101 audit(1761701398.383:541): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.388000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.394309 kernel: audit: type=1103 audit(1761701398.388:542): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.394338 kernel: audit: type=1006 audit(1761701398.388:543): pid=5217 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Oct 29 01:29:58.388000 audit[5217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed2b3bf50 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:29:58.388000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 
01:29:58.394591 sshd[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:29:58.399528 systemd[1]: Started session-23.scope. Oct 29 01:29:58.399726 systemd-logind[1329]: New session 23 of user core. Oct 29 01:29:58.407000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.408000 audit[5220]: CRED_ACQ pid=5220 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.661115 sshd[5217]: pam_unix(sshd:session): session closed for user core Oct 29 01:29:58.660000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.661000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:29:58.663727 systemd-logind[1329]: Session 23 logged out. Waiting for processes to exit. Oct 29 01:29:58.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.110:22-139.178.68.195:56036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:29:58.664577 systemd[1]: sshd@20-139.178.70.110:22-139.178.68.195:56036.service: Deactivated successfully. Oct 29 01:29:58.665060 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 01:29:58.666103 systemd-logind[1329]: Removed session 23. Oct 29 01:30:03.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.110:22-139.178.68.195:59198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 01:30:03.664174 systemd[1]: Started sshd@21-139.178.70.110:22-139.178.68.195:59198.service. Oct 29 01:30:03.667989 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 29 01:30:03.668042 kernel: audit: type=1130 audit(1761701403.663:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.110:22-139.178.68.195:59198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 01:30:03.731000 audit[5236]: USER_ACCT pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.732374 sshd[5236]: Accepted publickey for core from 139.178.68.195 port 59198 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA Oct 29 01:30:03.736200 kernel: audit: type=1101 audit(1761701403.731:550): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.736488 kernel: audit: type=1103 audit(1761701403.734:551): pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.734000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.739401 sshd[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 01:30:03.741786 kernel: audit: type=1006 audit(1761701403.735:552): pid=5236 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Oct 29 01:30:03.741821 kernel: audit: type=1300 audit(1761701403.735:552): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff110a69c0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:30:03.735000 audit[5236]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff110a69c0 a2=3 a3=0 items=0 ppid=1 pid=5236 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 01:30:03.742717 systemd[1]: Started session-24.scope. Oct 29 01:30:03.743415 systemd-logind[1329]: New session 24 of user core. Oct 29 01:30:03.735000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 01:30:03.745725 kernel: audit: type=1327 audit(1761701403.735:552): proctitle=737368643A20636F7265205B707269765D Oct 29 01:30:03.745000 audit[5236]: USER_START pid=5236 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.750233 kernel: audit: type=1105 audit(1761701403.745:553): pid=5236 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.751027 kernel: audit: type=1103 audit(1761701403.746:554): pid=5239 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.746000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Oct 29 01:30:03.972092 sshd[5236]: pam_unix(sshd:session): session closed for user core Oct 29 01:30:03.975000 audit[5236]: USER_END pid=5236 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.977630 systemd[1]: sshd@21-139.178.70.110:22-139.178.68.195:59198.service: Deactivated successfully. Oct 29 01:30:03.978343 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 01:30:03.978648 systemd-logind[1329]: Session 24 logged out. Waiting for processes to exit. Oct 29 01:30:03.979145 systemd-logind[1329]: Removed session 24. Oct 29 01:30:03.980261 kernel: audit: type=1106 audit(1761701403.975:555): pid=5236 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.975000 audit[5236]: CRED_DISP pid=5236 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.984204 kernel: audit: type=1104 audit(1761701403.975:556): pid=5236 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 29 01:30:03.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.110:22-139.178.68.195:59198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success'
Oct 29 01:30:06.491045 kubelet[2259]: E1029 01:30:06.491009 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-p6h9g" podUID="7dd39e5b-dcdd-482f-ab49-9053a64b98c9"
Oct 29 01:30:07.525144 kubelet[2259]: E1029 01:30:07.525117 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79569d88b4-q8r9t" podUID="c80c9899-41ee-40c7-92c7-ab20d72dcefe"
Oct 29 01:30:07.525435 kubelet[2259]: E1029 01:30:07.525336 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b46bb89cf-x7bmp" podUID="0c8f51f5-a169-488a-a224-1fc1684a62fb"
Oct 29 01:30:07.525435 kubelet[2259]: E1029 01:30:07.525365 2259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h4k8p" podUID="5af81d8c-f0dd-4f37-b2dd-db8e64891fd3"
Oct 29 01:30:08.971709 systemd[1]: Started sshd@22-139.178.70.110:22-139.178.68.195:59212.service.
Oct 29 01:30:08.975432 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 29 01:30:08.976317 kernel: audit: type=1130 audit(1761701408.970:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.110:22-139.178.68.195:59212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:30:08.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.110:22-139.178.68.195:59212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:30:09.050000 audit[5250]: USER_ACCT pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.054793 sshd[5250]: Accepted publickey for core from 139.178.68.195 port 59212 ssh2: RSA SHA256:dX9xZ5mtK3eVY9XPNyM7eUHC3/BhNnXhscR3rJLIlYA
Oct 29 01:30:09.058317 kernel: audit: type=1101 audit(1761701409.050:559): pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.061037 kernel: audit: type=1103 audit(1761701409.050:560): pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.061138 kernel: audit: type=1006 audit(1761701409.050:561): pid=5250 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Oct 29 01:30:09.050000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.055371 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 01:30:09.050000 audit[5250]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee46b3a60 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:30:09.061696 systemd[1]: Started session-25.scope.
Oct 29 01:30:09.062424 systemd-logind[1329]: New session 25 of user core.
Oct 29 01:30:09.065192 kernel: audit: type=1300 audit(1761701409.050:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee46b3a60 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 01:30:09.050000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 29 01:30:09.070195 kernel: audit: type=1327 audit(1761701409.050:561): proctitle=737368643A20636F7265205B707269765D
Oct 29 01:30:09.064000 audit[5250]: USER_START pid=5250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.075194 kernel: audit: type=1105 audit(1761701409.064:562): pid=5250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.065000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.079213 kernel: audit: type=1103 audit(1761701409.065:563): pid=5253 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.403086 sshd[5250]: pam_unix(sshd:session): session closed for user core
Oct 29 01:30:09.403000 audit[5250]: USER_END pid=5250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.405625 systemd-logind[1329]: Session 25 logged out. Waiting for processes to exit.
Oct 29 01:30:09.403000 audit[5250]: CRED_DISP pid=5250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.406393 systemd[1]: sshd@22-139.178.70.110:22-139.178.68.195:59212.service: Deactivated successfully.
Oct 29 01:30:09.406871 systemd[1]: session-25.scope: Deactivated successfully.
Oct 29 01:30:09.407657 systemd-logind[1329]: Removed session 25.
Oct 29 01:30:09.411243 kernel: audit: type=1106 audit(1761701409.403:564): pid=5250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.412080 kernel: audit: type=1104 audit(1761701409.403:565): pid=5250 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 29 01:30:09.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.110:22-139.178.68.195:59212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 01:30:10.496760 env[1344]: time="2025-10-29T01:30:10.496727376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 01:30:10.875446 env[1344]: time="2025-10-29T01:30:10.875345351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 29 01:30:10.875810 env[1344]: time="2025-10-29T01:30:10.875759297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 01:30:10.882602 kubelet[2259]: E1029 01:30:10.880510 2259 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 01:30:10.885216 kubelet[2259]: E1029 01:30:10.885195 2259 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 01:30:10.889994 kubelet[2259]: E1029 01:30:10.889962 2259 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hhz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6w9mz_calico-system(1b41dfbb-cb8d-4095-9219-ece15b48c5c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 01:30:10.892048 env[1344]: time="2025-10-29T01:30:10.891902856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""