Nov 1 00:58:26.660762 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:58:26.660777 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:58:26.660783 kernel: Disabled fast string operations
Nov 1 00:58:26.660787 kernel: BIOS-provided physical RAM map:
Nov 1 00:58:26.660791 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 1 00:58:26.660795 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 1 00:58:26.660801 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 1 00:58:26.660805 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 1 00:58:26.660809 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 1 00:58:26.660813 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 1 00:58:26.660817 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 1 00:58:26.660821 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 1 00:58:26.660825 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 1 00:58:26.660829 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 1 00:58:26.660835 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 1 00:58:26.660839 kernel: NX (Execute Disable) protection: active
Nov 1 00:58:26.660844 kernel: SMBIOS 2.7 present.
Nov 1 00:58:26.660848 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Nov 1 00:58:26.660853 kernel: vmware: hypercall mode: 0x00
Nov 1 00:58:26.660857 kernel: Hypervisor detected: VMware
Nov 1 00:58:26.660862 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 1 00:58:26.660867 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 1 00:58:26.660871 kernel: vmware: using clock offset of 3659597395 ns
Nov 1 00:58:26.660875 kernel: tsc: Detected 3408.000 MHz processor
Nov 1 00:58:26.660880 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:58:26.660885 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:58:26.660890 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 1 00:58:26.660894 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:58:26.660899 kernel: total RAM covered: 3072M
Nov 1 00:58:26.660904 kernel: Found optimal setting for mtrr clean up
Nov 1 00:58:26.660909 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 1 00:58:26.660914 kernel: Using GB pages for direct mapping
Nov 1 00:58:26.660918 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:58:26.660923 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 1 00:58:26.660927 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 1 00:58:26.660932 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 1 00:58:26.660936 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 1 00:58:26.660941 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 1 00:58:26.660945 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 1 00:58:26.660951 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 1 00:58:26.660957 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 1 00:58:26.660962 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 1 00:58:26.660967 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 1 00:58:26.660972 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 1 00:58:26.660978 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 1 00:58:26.660983 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 1 00:58:26.660988 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 1 00:58:26.660993 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 1 00:58:26.660998 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 1 00:58:26.661002 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 1 00:58:26.661007 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 1 00:58:26.661012 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 1 00:58:26.661017 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 1 00:58:26.661023 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 1 00:58:26.661027 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 1 00:58:26.661032 kernel: system APIC only can use physical flat
Nov 1 00:58:26.661037 kernel: Setting APIC routing to physical flat.
Nov 1 00:58:26.661042 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:58:26.661059 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 1 00:58:26.661064 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 1 00:58:26.661069 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 1 00:58:26.661073 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 1 00:58:26.661080 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 1 00:58:26.661085 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 1 00:58:26.661090 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 1 00:58:26.661094 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Nov 1 00:58:26.661099 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Nov 1 00:58:26.661104 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Nov 1 00:58:26.661108 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Nov 1 00:58:26.661113 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Nov 1 00:58:26.661118 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Nov 1 00:58:26.661123 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Nov 1 00:58:26.661129 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Nov 1 00:58:26.661134 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Nov 1 00:58:26.661138 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Nov 1 00:58:26.661143 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Nov 1 00:58:26.661148 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Nov 1 00:58:26.661152 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Nov 1 00:58:26.661157 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Nov 1 00:58:26.661162 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Nov 1 00:58:26.661167 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Nov 1 00:58:26.661171 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Nov 1 00:58:26.661180 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Nov 1 00:58:26.661187 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Nov 1 00:58:26.661192 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Nov 1 00:58:26.661197 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Nov 1 00:58:26.661202 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Nov 1 00:58:26.661206 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Nov 1 00:58:26.661211 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Nov 1 00:58:26.661216 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Nov 1 00:58:26.661221 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Nov 1 00:58:26.661225 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Nov 1 00:58:26.661231 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Nov 1 00:58:26.661236 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Nov 1 00:58:26.661241 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Nov 1 00:58:26.661246 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Nov 1 00:58:26.661250 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Nov 1 00:58:26.661255 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Nov 1 00:58:26.661260 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Nov 1 00:58:26.661265 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Nov 1 00:58:26.664173 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Nov 1 00:58:26.664182 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Nov 1 00:58:26.664192 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Nov 1 00:58:26.664197 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Nov 1 00:58:26.664202 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Nov 1 00:58:26.664206 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Nov 1 00:58:26.664211 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Nov 1 00:58:26.664216 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Nov 1 00:58:26.664221 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Nov 1 00:58:26.664226 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Nov 1 00:58:26.664230 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Nov 1 00:58:26.664235 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Nov 1 00:58:26.664241 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Nov 1 00:58:26.664246 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Nov 1 00:58:26.664251 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Nov 1 00:58:26.664256 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Nov 1 00:58:26.664260 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Nov 1 00:58:26.664265 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Nov 1 00:58:26.664297 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Nov 1 00:58:26.664304 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Nov 1 00:58:26.664309 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Nov 1 00:58:26.664314 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Nov 1 00:58:26.664319 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Nov 1 00:58:26.664325 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Nov 1 00:58:26.664330 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Nov 1 00:58:26.664336 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Nov 1 00:58:26.664341 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Nov 1 00:58:26.664346 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Nov 1 00:58:26.664351 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Nov 1 00:58:26.664356 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Nov 1 00:58:26.664362 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Nov 1 00:58:26.664368 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Nov 1 00:58:26.664373 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Nov 1 00:58:26.664378 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Nov 1 00:58:26.664383 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Nov 1 00:58:26.664388 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Nov 1 00:58:26.664393 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Nov 1 00:58:26.664398 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Nov 1 00:58:26.664403 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Nov 1 00:58:26.664409 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Nov 1 00:58:26.664415 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Nov 1 00:58:26.664420 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Nov 1 00:58:26.664425 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Nov 1 00:58:26.664430 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Nov 1 00:58:26.664436 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Nov 1 00:58:26.664441 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Nov 1 00:58:26.664446 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Nov 1 00:58:26.664451 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Nov 1 00:58:26.664456 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Nov 1 00:58:26.664463 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Nov 1 00:58:26.664468 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Nov 1 00:58:26.664473 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Nov 1 00:58:26.664478 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Nov 1 00:58:26.664483 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Nov 1 00:58:26.664488 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Nov 1 00:58:26.664494 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Nov 1 00:58:26.664499 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Nov 1 00:58:26.664504 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Nov 1 00:58:26.664509 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Nov 1 00:58:26.664516 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Nov 1 00:58:26.664521 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Nov 1 00:58:26.664526 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Nov 1 00:58:26.664531 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Nov 1 00:58:26.664536 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Nov 1 00:58:26.664542 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Nov 1 00:58:26.664547 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Nov 1 00:58:26.664552 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Nov 1 00:58:26.664557 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Nov 1 00:58:26.664562 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Nov 1 00:58:26.664568 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Nov 1 00:58:26.664574 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Nov 1 00:58:26.664579 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Nov 1 00:58:26.664584 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Nov 1 00:58:26.664589 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Nov 1 00:58:26.664594 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Nov 1 00:58:26.664599 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Nov 1 00:58:26.664605 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Nov 1 00:58:26.664610 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Nov 1 00:58:26.664615 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Nov 1 00:58:26.664621 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Nov 1 00:58:26.664626 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Nov 1 00:58:26.664632 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Nov 1 00:58:26.664637 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Nov 1 00:58:26.664642 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Nov 1 00:58:26.664647 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Nov 1 00:58:26.664652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:58:26.664658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 00:58:26.664663 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 1 00:58:26.664669 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Nov 1 00:58:26.664675 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Nov 1 00:58:26.664681 kernel: Zone ranges:
Nov 1 00:58:26.664686 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:58:26.664691 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 1 00:58:26.664697 kernel: Normal empty
Nov 1 00:58:26.664702 kernel: Movable zone start for each node
Nov 1 00:58:26.664707 kernel: Early memory node ranges
Nov 1 00:58:26.664713 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 1 00:58:26.664718 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 1 00:58:26.664724 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 1 00:58:26.664730 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 1 00:58:26.664735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:58:26.664740 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 1 00:58:26.664746 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 1 00:58:26.664751 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 1 00:58:26.664756 kernel: system APIC only can use physical flat
Nov 1 00:58:26.664762 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 1 00:58:26.664767 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 1 00:58:26.664773 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 1 00:58:26.664778 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 1 00:58:26.664783 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 1 00:58:26.664789 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 1 00:58:26.664794 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 1 00:58:26.664799 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 1 00:58:26.664804 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 1 00:58:26.664809 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 1 00:58:26.664814 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 1 00:58:26.664820 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 1 00:58:26.664826 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 1 00:58:26.664831 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 1 00:58:26.664836 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 1 00:58:26.664841 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 1 00:58:26.664847 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 1 00:58:26.664852 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 1 00:58:26.664857 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 1 00:58:26.664862 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 1 00:58:26.664867 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 1 00:58:26.664873 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 1 00:58:26.664879 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 1 00:58:26.664884 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 1 00:58:26.664889 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 1 00:58:26.664894 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 1 00:58:26.664899 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 1 00:58:26.664905 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 1 00:58:26.664910 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 1 00:58:26.664915 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 1 00:58:26.664920 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 1 00:58:26.664926 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 1 00:58:26.664931 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 1 00:58:26.664936 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 1 00:58:26.664942 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 1 00:58:26.664947 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 1 00:58:26.664952 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 1 00:58:26.664957 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 1 00:58:26.664962 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 1 00:58:26.664967 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 1 00:58:26.664973 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 1 00:58:26.664978 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 1 00:58:26.664984 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 1 00:58:26.664989 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 1 00:58:26.664994 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 1 00:58:26.664999 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 1 00:58:26.665004 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 1 00:58:26.665009 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 1 00:58:26.665015 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 1 00:58:26.665020 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 1 00:58:26.665026 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 1 00:58:26.665031 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 1 00:58:26.665036 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 1 00:58:26.665041 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 1 00:58:26.665046 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 1 00:58:26.665051 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 1 00:58:26.665057 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 1 00:58:26.665062 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 1 00:58:26.665067 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 1 00:58:26.665073 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 1 00:58:26.665078 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 1 00:58:26.665083 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 1 00:58:26.665088 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 1 00:58:26.665094 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 1 00:58:26.665099 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 1 00:58:26.665104 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 1 00:58:26.665109 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 1 00:58:26.665114 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 1 00:58:26.665119 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 1 00:58:26.665126 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 1 00:58:26.665131 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 1 00:58:26.665136 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 1 00:58:26.665141 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 1 00:58:26.665147 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 1 00:58:26.665152 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 1 00:58:26.665157 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 1 00:58:26.665162 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 1 00:58:26.665167 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 1 00:58:26.665174 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 1 00:58:26.665179 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 1 00:58:26.665184 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 1 00:58:26.665190 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 1 00:58:26.665195 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 1 00:58:26.665200 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 1 00:58:26.665205 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 1 00:58:26.665210 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 1 00:58:26.665215 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 1 00:58:26.665220 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 1 00:58:26.665226 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 1 00:58:26.665232 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 1 00:58:26.665237 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 1 00:58:26.665242 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 1 00:58:26.665247 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 1 00:58:26.665252 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 1 00:58:26.665257 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 1 00:58:26.665262 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 1 00:58:26.666855 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 1 00:58:26.666866 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 1 00:58:26.666872 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 1 00:58:26.666878 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 1 00:58:26.666883 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 1 00:58:26.666888 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 1 00:58:26.666894 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 1 00:58:26.666899 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 1 00:58:26.666904 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 1 00:58:26.666909 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 1 00:58:26.666915 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 1 00:58:26.666921 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 1 00:58:26.666926 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 1 00:58:26.666931 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 1 00:58:26.666951 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 1 00:58:26.666957 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 1 00:58:26.666962 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 1 00:58:26.666968 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 1 00:58:26.666973 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 1 00:58:26.666978 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 1 00:58:26.666985 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 1 00:58:26.666990 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 1 00:58:26.666995 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 1 00:58:26.667000 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 1 00:58:26.667006 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 1 00:58:26.667011 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 1 00:58:26.667016 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 1 00:58:26.667021 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 1 00:58:26.667026 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 1 00:58:26.667032 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 1 00:58:26.667038 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 1 00:58:26.667043 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 1 00:58:26.667048 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:58:26.667053 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 1 00:58:26.667059 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:58:26.667064 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 1 00:58:26.667069 kernel: TSC deadline timer available
Nov 1 00:58:26.667075 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Nov 1 00:58:26.667080 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 1 00:58:26.667087 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 1 00:58:26.667092 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:58:26.667097 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Nov 1 00:58:26.667103 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Nov 1 00:58:26.667108 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Nov 1 00:58:26.667114 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 1 00:58:26.667119 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 1 00:58:26.667124 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 1 00:58:26.667130 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 1 00:58:26.667135 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 1 00:58:26.667140 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 1 00:58:26.667146 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 1 00:58:26.667158 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 1 00:58:26.667164 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 1 00:58:26.667170 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 1 00:58:26.667185 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 1 00:58:26.667192 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 1 00:58:26.667200 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 1 00:58:26.667205 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 1 00:58:26.667211 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 1 00:58:26.667216 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 1 00:58:26.667222 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Nov 1 00:58:26.667228 kernel: Policy zone: DMA32
Nov 1 00:58:26.667234 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:58:26.667240 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:58:26.667247 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 1 00:58:26.667253 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 1 00:58:26.667258 kernel: printk: log_buf_len min size: 262144 bytes
Nov 1 00:58:26.667264 kernel: printk: log_buf_len: 1048576 bytes
Nov 1 00:58:26.667276 kernel: printk: early log buf free: 239728(91%)
Nov 1 00:58:26.667282 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:58:26.667289 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:58:26.667294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:58:26.667300 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 155976K reserved, 0K cma-reserved)
Nov 1 00:58:26.667307 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 1 00:58:26.667313 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:58:26.667318 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:58:26.667325 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:58:26.667331 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:58:26.667338 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 1 00:58:26.667344 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:58:26.667349 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:58:26.667355 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:58:26.667361 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 1 00:58:26.667366 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 1 00:58:26.667372 kernel: random: crng init done
Nov 1 00:58:26.667377 kernel: Console: colour VGA+ 80x25
Nov 1 00:58:26.667383 kernel: printk: console [tty0] enabled
Nov 1 00:58:26.667389 kernel: printk: console [ttyS0] enabled
Nov 1 00:58:26.667395 kernel: ACPI: Core revision 20210730
Nov 1 00:58:26.667401 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 1 00:58:26.667407 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:58:26.667413 kernel: x2apic enabled
Nov 1 00:58:26.667418 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:58:26.667424 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:58:26.667430 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 1 00:58:26.667436 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 1 00:58:26.667441 kernel: Disabled fast string operations
Nov 1 00:58:26.667448 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 00:58:26.667454 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 00:58:26.667459 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:58:26.667465 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Nov 1 00:58:26.667471 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 00:58:26.667477 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 00:58:26.667482 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 00:58:26.667488 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 00:58:26.667495 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 00:58:26.667501 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:58:26.667507 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:58:26.667512 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:58:26.667518 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 1 00:58:26.667524 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 1 00:58:26.667529 kernel: active return thunk: its_return_thunk
Nov 1 00:58:26.667535 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:58:26.667541 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:58:26.667547 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:58:26.667553 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:58:26.667559 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:58:26.667564 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:58:26.667570 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:58:26.667575 kernel: pid_max: default: 131072 minimum: 1024
Nov 1 00:58:26.667581 kernel: LSM: Security Framework initializing
Nov 1 00:58:26.667587 kernel: SELinux: Initializing.
Nov 1 00:58:26.667593 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:58:26.667599 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:58:26.667606 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 1 00:58:26.667611 kernel: Performance Events: Skylake events, core PMU driver.
Nov 1 00:58:26.667617 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 1 00:58:26.667622 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 1 00:58:26.667628 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 1 00:58:26.667634 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 1 00:58:26.667639 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 1 00:58:26.667644 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 1 00:58:26.667651 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 1 00:58:26.667657 kernel: ... version: 1
Nov 1 00:58:26.667662 kernel: ... bit width: 48
Nov 1 00:58:26.667668 kernel: ... generic registers: 4
Nov 1 00:58:26.667673 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:58:26.667680 kernel: ... max period: 000000007fffffff
Nov 1 00:58:26.667686 kernel: ... fixed-purpose events: 0
Nov 1 00:58:26.667691 kernel: ... event mask: 000000000000000f
Nov 1 00:58:26.667697 kernel: signal: max sigframe size: 1776
Nov 1 00:58:26.667704 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:58:26.667709 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:58:26.667715 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:58:26.667721 kernel: x86: Booting SMP configuration:
Nov 1 00:58:26.667726 kernel: ....
node #0, CPUs: #1 Nov 1 00:58:26.667732 kernel: Disabled fast string operations Nov 1 00:58:26.667737 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 1 00:58:26.667743 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 1 00:58:26.667748 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:58:26.667754 kernel: smpboot: Max logical packages: 128 Nov 1 00:58:26.667761 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 1 00:58:26.667766 kernel: devtmpfs: initialized Nov 1 00:58:26.667772 kernel: x86/mm: Memory block size: 128MB Nov 1 00:58:26.667778 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 1 00:58:26.667783 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:58:26.667789 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 1 00:58:26.667795 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:58:26.667801 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:58:26.667806 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:58:26.667813 kernel: audit: type=2000 audit(1761958705.085:1): state=initialized audit_enabled=0 res=1 Nov 1 00:58:26.667819 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:58:26.667824 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:58:26.667830 kernel: cpuidle: using governor menu Nov 1 00:58:26.667836 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 1 00:58:26.667841 kernel: ACPI: bus type PCI registered Nov 1 00:58:26.667847 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:58:26.667853 kernel: dca service started, version 1.12.1 Nov 1 00:58:26.667859 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 1 00:58:26.667865 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Nov 1 
00:58:26.667871 kernel: PCI: Using configuration type 1 for base access Nov 1 00:58:26.667877 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:58:26.667882 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:58:26.667888 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:58:26.667893 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:58:26.667899 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:58:26.667904 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:58:26.667910 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:58:26.667916 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:58:26.667922 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:58:26.667928 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:58:26.667933 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 1 00:58:26.667939 kernel: ACPI: Interpreter enabled Nov 1 00:58:26.667945 kernel: ACPI: PM: (supports S0 S1 S5) Nov 1 00:58:26.667950 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:58:26.667956 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:58:26.667962 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 1 00:58:26.667969 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 1 00:58:26.668047 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:58:26.668099 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 1 00:58:26.668146 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 1 00:58:26.668154 kernel: PCI host bridge to bus 0000:00 Nov 1 00:58:26.668203 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:58:26.668249 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 1 
00:58:26.668302 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:58:26.668344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:58:26.668390 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 1 00:58:26.668432 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 1 00:58:26.668520 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 1 00:58:26.668584 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 1 00:58:26.668645 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 1 00:58:26.668705 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 1 00:58:26.668758 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 1 00:58:26.668811 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 1 00:58:26.668863 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 1 00:58:26.668915 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 1 00:58:26.668969 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 1 00:58:26.669025 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 1 00:58:26.669078 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 1 00:58:26.669130 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 1 00:58:26.669187 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 1 00:58:26.669239 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 1 00:58:26.669333 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Nov 1 00:58:26.669394 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 1 00:58:26.671950 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 1 00:58:26.672009 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 1 00:58:26.672063 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 1 
00:58:26.672116 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 1 00:58:26.672168 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:58:26.672225 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 1 00:58:26.672303 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.672359 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.672416 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.672472 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.672529 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.672582 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.672644 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.672696 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.672753 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.672806 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.672862 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.672915 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.672974 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673026 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673081 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673133 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673198 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673251 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673324 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673378 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673435 kernel: pci 0000:00:16.2: 
[15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673487 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673545 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673600 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673656 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673708 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673764 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673817 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673873 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.673924 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.673982 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674036 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674092 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674144 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674200 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674252 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674318 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674370 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674426 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674478 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674537 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674590 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674649 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674702 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674758 kernel: pci 
0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674810 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674865 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.674917 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.674975 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.675027 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.675083 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.675135 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.675192 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.675244 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.676388 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.676453 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.676515 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.676570 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.676627 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.676680 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.676736 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.676791 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.676846 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 1 00:58:26.676898 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.676955 kernel: pci_bus 0000:01: extended config space not accessible Nov 1 00:58:26.677010 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 00:58:26.677066 kernel: pci_bus 0000:02: extended config space not accessible Nov 1 00:58:26.677077 kernel: acpiphp: Slot [32] registered Nov 1 00:58:26.677083 kernel: acpiphp: Slot [33] registered Nov 
1 00:58:26.677089 kernel: acpiphp: Slot [34] registered Nov 1 00:58:26.677094 kernel: acpiphp: Slot [35] registered Nov 1 00:58:26.677100 kernel: acpiphp: Slot [36] registered Nov 1 00:58:26.677106 kernel: acpiphp: Slot [37] registered Nov 1 00:58:26.677112 kernel: acpiphp: Slot [38] registered Nov 1 00:58:26.677117 kernel: acpiphp: Slot [39] registered Nov 1 00:58:26.677123 kernel: acpiphp: Slot [40] registered Nov 1 00:58:26.677130 kernel: acpiphp: Slot [41] registered Nov 1 00:58:26.677136 kernel: acpiphp: Slot [42] registered Nov 1 00:58:26.677142 kernel: acpiphp: Slot [43] registered Nov 1 00:58:26.677147 kernel: acpiphp: Slot [44] registered Nov 1 00:58:26.677153 kernel: acpiphp: Slot [45] registered Nov 1 00:58:26.677159 kernel: acpiphp: Slot [46] registered Nov 1 00:58:26.677164 kernel: acpiphp: Slot [47] registered Nov 1 00:58:26.677170 kernel: acpiphp: Slot [48] registered Nov 1 00:58:26.677183 kernel: acpiphp: Slot [49] registered Nov 1 00:58:26.677190 kernel: acpiphp: Slot [50] registered Nov 1 00:58:26.677197 kernel: acpiphp: Slot [51] registered Nov 1 00:58:26.677203 kernel: acpiphp: Slot [52] registered Nov 1 00:58:26.677208 kernel: acpiphp: Slot [53] registered Nov 1 00:58:26.677214 kernel: acpiphp: Slot [54] registered Nov 1 00:58:26.677220 kernel: acpiphp: Slot [55] registered Nov 1 00:58:26.677226 kernel: acpiphp: Slot [56] registered Nov 1 00:58:26.677232 kernel: acpiphp: Slot [57] registered Nov 1 00:58:26.677238 kernel: acpiphp: Slot [58] registered Nov 1 00:58:26.677244 kernel: acpiphp: Slot [59] registered Nov 1 00:58:26.677250 kernel: acpiphp: Slot [60] registered Nov 1 00:58:26.677256 kernel: acpiphp: Slot [61] registered Nov 1 00:58:26.677262 kernel: acpiphp: Slot [62] registered Nov 1 00:58:26.689296 kernel: acpiphp: Slot [63] registered Nov 1 00:58:26.689393 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 1 00:58:26.689447 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 1 00:58:26.689497 kernel: 
pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 1 00:58:26.689545 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 1 00:58:26.689593 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 1 00:58:26.689645 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 1 00:58:26.689693 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 1 00:58:26.689740 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 1 00:58:26.689787 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 1 00:58:26.689844 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 1 00:58:26.689895 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 1 00:58:26.689944 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 1 00:58:26.689995 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 1 00:58:26.690044 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 00:58:26.690093 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 1 00:58:26.690142 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 1 00:58:26.690196 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 1 00:58:26.690244 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 1 00:58:26.690303 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 1 00:58:26.690355 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 1 00:58:26.690403 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 1 00:58:26.690452 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 1 00:58:26.690503 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 1 00:58:26.690550 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 1 00:58:26.690598 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 1 00:58:26.690645 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 1 00:58:26.690695 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 1 00:58:26.690746 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 1 00:58:26.690793 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 1 00:58:26.690843 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 1 00:58:26.690891 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 1 00:58:26.690938 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 1 00:58:26.690991 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 1 00:58:26.691039 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 1 00:58:26.691087 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 1 00:58:26.691136 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 1 00:58:26.691200 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 1 00:58:26.691250 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Nov 1 00:58:26.691308 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 1 00:58:26.691358 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 1 00:58:26.691406 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 1 00:58:26.691461 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 1 00:58:26.691511 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 1 00:58:26.691559 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 1 00:58:26.691608 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 1 00:58:26.691657 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 1 00:58:26.691706 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 1 00:58:26.691758 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 1 00:58:26.691807 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 00:58:26.691856 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 1 00:58:26.691905 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 1 00:58:26.691953 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 1 00:58:26.692015 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 1 00:58:26.692129 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 1 00:58:26.692183 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 1 00:58:26.692231 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 1 00:58:26.692295 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 1 00:58:26.692346 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 1 00:58:26.692394 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 1 00:58:26.692441 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 1 00:58:26.692487 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 1 00:58:26.692536 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 1 00:58:26.692587 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 1 00:58:26.692633 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 1 00:58:26.692683 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 1 00:58:26.692731 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 1 00:58:26.692778 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 1 00:58:26.692827 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 1 00:58:26.692874 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 1 00:58:26.692921 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 1 00:58:26.692973 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 1 00:58:26.693019 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 1 00:58:26.693067 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Nov 1 00:58:26.693115 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 1 00:58:26.693164 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 1 00:58:26.693216 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 1 00:58:26.693265 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 1 00:58:26.693329 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 1 00:58:26.693380 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 1 00:58:26.693428 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 1 00:58:26.693477 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 1 00:58:26.693524 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 1 00:58:26.693571 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 1 00:58:26.693618 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 1 00:58:26.693668 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 1 00:58:26.693717 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 1 00:58:26.693764 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 1 00:58:26.693811 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 1 00:58:26.693861 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 1 00:58:26.693908 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 1 00:58:26.693956 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 1 00:58:26.694005 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 1 00:58:26.694052 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 1 00:58:26.694101 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 1 00:58:26.694150 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 1 00:58:26.694205 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Nov 1 00:58:26.694254 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 1 00:58:26.694310 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 1 00:58:26.694359 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 1 00:58:26.694406 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 1 00:58:26.694455 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 1 00:58:26.694505 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 1 00:58:26.694553 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 1 00:58:26.694601 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 1 00:58:26.694648 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 1 00:58:26.694695 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 1 00:58:26.694742 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 1 00:58:26.694791 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 1 00:58:26.694838 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 1 00:58:26.694888 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 1 00:58:26.694937 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 1 00:58:26.694986 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 1 00:58:26.695035 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 1 00:58:26.695082 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 1 00:58:26.695132 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 1 00:58:26.695179 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 1 00:58:26.695226 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 1 00:58:26.699486 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 1 00:58:26.699560 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Nov 1 00:58:26.699613 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 1 00:58:26.699666 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 1 00:58:26.699715 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 1 00:58:26.699762 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 1 00:58:26.699812 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 1 00:58:26.699859 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 1 00:58:26.699910 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 1 00:58:26.699959 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 1 00:58:26.700007 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 1 00:58:26.700055 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 1 00:58:26.700063 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 1 00:58:26.700069 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 1 00:58:26.700075 kernel: ACPI: PCI: Interrupt link LNKB disabled Nov 1 00:58:26.700081 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:58:26.700089 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 1 00:58:26.700095 kernel: iommu: Default domain type: Translated Nov 1 00:58:26.700101 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:58:26.700150 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 1 00:58:26.700203 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:58:26.702594 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 1 00:58:26.702605 kernel: vgaarb: loaded Nov 1 00:58:26.702611 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:58:26.702617 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:58:26.702625 kernel: PTP clock support registered Nov 1 00:58:26.702631 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:58:26.702637 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:58:26.702643 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 1 00:58:26.702648 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 1 00:58:26.702654 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 1 00:58:26.702660 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 1 00:58:26.702665 kernel: clocksource: Switched to clocksource tsc-early Nov 1 00:58:26.702671 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:58:26.702678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:58:26.702684 kernel: pnp: PnP ACPI init Nov 1 00:58:26.702742 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 1 00:58:26.702788 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 1 00:58:26.702831 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 1 00:58:26.702880 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 1 00:58:26.702927 kernel: pnp 00:06: [dma 2] Nov 1 00:58:26.702978 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 1 00:58:26.703022 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 1 00:58:26.703066 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 1 00:58:26.703074 kernel: pnp: PnP ACPI: found 8 devices Nov 1 00:58:26.703080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:58:26.703086 kernel: NET: Registered PF_INET protocol family Nov 1 00:58:26.703092 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:58:26.703098 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 1 00:58:26.703105 
kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:58:26.703111 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:58:26.703117 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 00:58:26.703122 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 1 00:58:26.703128 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:58:26.703135 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:58:26.703140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:58:26.703146 kernel: NET: Registered PF_XDP protocol family Nov 1 00:58:26.703199 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 1 00:58:26.703252 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 1 00:58:26.703315 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 1 00:58:26.703367 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 1 00:58:26.703417 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 1 00:58:26.703467 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 1 00:58:26.703520 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 1 00:58:26.703570 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 1 00:58:26.703619 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 1 00:58:26.703669 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 1 00:58:26.703719 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 1 00:58:26.703769 kernel: pci 
0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 1 00:58:26.705004 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 1 00:58:26.705068 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 1 00:58:26.706580 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 1 00:58:26.706644 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 1 00:58:26.706698 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 1 00:58:26.706750 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 1 00:58:26.706805 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 1 00:58:26.706855 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 1 00:58:26.706905 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 1 00:58:26.706954 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 1 00:58:26.707002 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 1 00:58:26.707051 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 1 00:58:26.707104 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 1 00:58:26.707152 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.707215 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.707272 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.707328 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.707377 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.707425 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 1 
00:58:26.707474 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.707525 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.707573 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.711888 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.711949 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712220 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712316 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712379 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712434 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712487 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712537 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712585 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712633 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712681 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712729 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712776 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712826 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712875 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.712924 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.712971 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713020 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713067 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713115 kernel: pci 0000:00:17.7: BAR 13: no space for 
[io size 0x1000] Nov 1 00:58:26.713163 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713220 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713281 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713341 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713391 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713440 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713488 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713536 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713584 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713633 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713684 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713733 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713780 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713828 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713876 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.713924 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.713971 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714018 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.714065 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714114 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.714163 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714211 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.714265 kernel: pci 
0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714347 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.714417 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714465 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.714513 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714560 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.714847 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.714929 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.715277 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.715339 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.715388 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.715437 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.715485 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.715534 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.715583 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.715631 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.715980 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.716036 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.716087 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.716260 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.716327 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.716377 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.716426 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 1 
00:58:26.716630 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.716689 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.716744 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.716793 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.717148 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.717219 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.717320 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.717372 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.717729 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 1 00:58:26.717784 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 1 00:58:26.717836 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 00:58:26.717886 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 1 00:58:26.717954 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 1 00:58:26.718003 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 1 00:58:26.718051 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 1 00:58:26.718104 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 1 00:58:26.718152 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 1 00:58:26.718200 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 1 00:58:26.718248 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 1 00:58:26.718303 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 1 00:58:26.718356 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 1 00:58:26.718403 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 1 00:58:26.718451 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 1 00:58:26.718498 kernel: 
pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 1 00:58:26.718547 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 1 00:58:26.718594 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 1 00:58:26.718641 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 1 00:58:26.718688 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 1 00:58:26.718736 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 1 00:58:26.718785 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 1 00:58:26.718833 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 1 00:58:26.718900 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 1 00:58:26.718950 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 1 00:58:26.718996 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 1 00:58:26.719048 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 1 00:58:26.719098 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 1 00:58:26.719145 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 1 00:58:26.719193 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 1 00:58:26.719241 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 1 00:58:26.719302 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 1 00:58:26.719360 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 1 00:58:26.719410 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 1 00:58:26.719747 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 1 00:58:26.719816 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 1 00:58:26.719873 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 1 00:58:26.720214 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 1 00:58:26.720317 
kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 1 00:58:26.720371 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 1 00:58:26.720567 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 1 00:58:26.720621 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 1 00:58:26.720671 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 1 00:58:26.720994 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 1 00:58:26.721057 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 1 00:58:26.721111 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 1 00:58:26.721160 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 1 00:58:26.721208 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 1 00:58:26.721257 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 1 00:58:26.721340 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 1 00:58:26.721389 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 1 00:58:26.721436 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 1 00:58:26.721763 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 1 00:58:26.721826 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 1 00:58:26.721878 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 1 00:58:26.721936 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 1 00:58:26.721985 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 1 00:58:26.722035 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 1 00:58:26.722082 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 1 00:58:26.722130 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 1 00:58:26.722178 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 1 00:58:26.722227 
kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 1 00:58:26.722333 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 1 00:58:26.722384 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 1 00:58:26.722435 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 1 00:58:26.722482 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 1 00:58:26.722529 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 1 00:58:26.722578 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 1 00:58:26.722625 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 1 00:58:26.722672 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 1 00:58:26.722720 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 1 00:58:26.722768 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 1 00:58:26.722815 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 1 00:58:26.722862 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 1 00:58:26.722911 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 1 00:58:26.722959 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 1 00:58:26.723007 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 1 00:58:26.723054 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 1 00:58:26.723101 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 1 00:58:26.723149 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 1 00:58:26.723201 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 1 00:58:26.723250 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 1 00:58:26.723326 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 1 00:58:26.723378 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 1 
00:58:26.723427 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 1 00:58:26.723762 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 1 00:58:26.723822 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 1 00:58:26.723874 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 1 00:58:26.724205 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 1 00:58:26.724276 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 1 00:58:26.724332 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 1 00:58:26.724382 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 1 00:58:26.724624 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 1 00:58:26.724681 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 1 00:58:26.724733 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 1 00:58:26.725067 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 1 00:58:26.725138 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 1 00:58:26.725213 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 1 00:58:26.725394 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 1 00:58:26.725451 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 1 00:58:26.725501 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 1 00:58:26.725826 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 1 00:58:26.725891 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 1 00:58:26.725942 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 1 00:58:26.725993 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 1 00:58:26.726043 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 1 00:58:26.726091 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 1 
00:58:26.726142 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 1 00:58:26.726197 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 1 00:58:26.726246 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 1 00:58:26.726334 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 1 00:58:26.726382 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 1 00:58:26.726434 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 1 00:58:26.726482 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 1 00:58:26.726531 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 1 00:58:26.726579 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 1 00:58:26.726625 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 1 00:58:26.726669 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 1 00:58:26.726712 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 1 00:58:26.726755 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 1 00:58:26.726798 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 1 00:58:26.726848 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 1 00:58:26.726894 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 1 00:58:26.726938 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 1 00:58:26.726983 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 1 00:58:26.727027 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 1 00:58:26.727072 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 1 00:58:26.727116 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 1 00:58:26.727164 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 1 00:58:26.727231 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Nov 1 00:58:26.727286 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 1 00:58:26.727331 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 1 00:58:26.727383 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 1 00:58:26.727429 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Nov 1 00:58:26.727473 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 1 00:58:26.727525 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 1 00:58:26.727570 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 1 00:58:26.727613 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 1 00:58:26.727662 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 1 00:58:26.727707 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 1 00:58:26.727757 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 1 00:58:26.727804 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 1 00:58:26.727854 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 1 00:58:26.727899 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 1 00:58:26.727948 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 1 00:58:26.727994 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 1 00:58:26.728044 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 1 00:58:26.728092 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 1 00:58:26.728142 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 1 00:58:26.728187 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 1 00:58:26.728231 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 1 00:58:26.728287 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 1 00:58:26.728335 kernel: 
pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 1 00:58:26.728389 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 1 00:58:26.728439 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 1 00:58:26.728485 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 1 00:58:26.728530 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 1 00:58:26.728579 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 1 00:58:26.728624 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 1 00:58:26.728674 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 1 00:58:26.728723 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 1 00:58:26.728773 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 1 00:58:26.728817 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 1 00:58:26.728869 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 1 00:58:26.728914 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 1 00:58:26.728964 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 1 00:58:26.729012 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 1 00:58:26.729062 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 1 00:58:26.729108 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 1 00:58:26.729152 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 1 00:58:26.729200 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 1 00:58:26.729246 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 1 00:58:26.729311 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 1 00:58:26.729367 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Nov 1 00:58:26.729416 kernel: pci_bus 0000:15: resource 1 [mem 
0xfcb00000-0xfcbfffff] Nov 1 00:58:26.729514 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 1 00:58:26.729650 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 1 00:58:26.729782 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 1 00:58:26.729906 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 1 00:58:26.729958 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 1 00:58:26.730007 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 1 00:58:26.730053 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 1 00:58:26.730103 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 1 00:58:26.730149 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 1 00:58:26.730210 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 1 00:58:26.730261 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 1 00:58:26.730318 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 1 00:58:26.730364 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 1 00:58:26.730409 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 1 00:58:26.730481 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 1 00:58:26.730535 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 1 00:58:26.730736 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 1 00:58:26.730797 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 1 00:58:26.730844 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 1 00:58:26.731217 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 1 00:58:26.731281 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 1 00:58:26.731335 kernel: pci_bus 0000:1f: resource 1 [mem 
0xfc200000-0xfc2fffff] Nov 1 00:58:26.731385 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 1 00:58:26.731581 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 1 00:58:26.731637 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 1 00:58:26.731693 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 1 00:58:26.732020 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 1 00:58:26.732077 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 1 00:58:26.732124 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 1 00:58:26.732182 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 1 00:58:26.732192 kernel: PCI: CLS 32 bytes, default 64 Nov 1 00:58:26.732199 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:58:26.732205 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 1 00:58:26.732211 kernel: clocksource: Switched to clocksource tsc Nov 1 00:58:26.732218 kernel: Initialise system trusted keyrings Nov 1 00:58:26.732224 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 00:58:26.732230 kernel: Key type asymmetric registered Nov 1 00:58:26.732238 kernel: Asymmetric key parser 'x509' registered Nov 1 00:58:26.732244 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:58:26.732250 kernel: io scheduler mq-deadline registered Nov 1 00:58:26.732256 kernel: io scheduler kyber registered Nov 1 00:58:26.732263 kernel: io scheduler bfq registered Nov 1 00:58:26.732362 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 1 00:58:26.732414 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.732464 kernel: pcieport 0000:00:15.1: PME: 
Signaling with IRQ 25 Nov 1 00:58:26.732669 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.732726 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 1 00:58:26.733055 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.733116 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 1 00:58:26.733168 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.733465 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 1 00:58:26.733531 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.733588 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 1 00:58:26.733654 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.733914 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 1 00:58:26.733972 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.734025 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 1 00:58:26.734097 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.734991 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 1 00:58:26.735049 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.735103 kernel: pcieport 
0000:00:16.1: PME: Signaling with IRQ 33 Nov 1 00:58:26.735578 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.735925 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 1 00:58:26.735986 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736044 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 1 00:58:26.736094 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736145 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 1 00:58:26.736193 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736243 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Nov 1 00:58:26.736307 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736359 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 1 00:58:26.736407 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736457 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 1 00:58:26.736646 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736704 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 1 00:58:26.736754 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 1 00:58:26.736983 kernel: 
pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Nov 1 00:58:26.737040 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.737091 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Nov 1 00:58:26.737140 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.737485 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Nov 1 00:58:26.737719 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.737782 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Nov 1 00:58:26.737836 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.737888 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Nov 1 00:58:26.738199 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.738261 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Nov 1 00:58:26.738414 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.738472 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Nov 1 00:58:26.738522 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.738756 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Nov 1 00:58:26.738810 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.738862 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Nov 1 00:58:26.738914 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.738969 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Nov 1 00:58:26.739019 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.739068 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Nov 1 00:58:26.739117 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.739168 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Nov 1 00:58:26.739218 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.739274 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Nov 1 00:58:26.739325 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.739375 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Nov 1 00:58:26.739424 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.739476 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Nov 1 00:58:26.739524 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Nov 1 00:58:26.739533 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:58:26.739540 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:58:26.739546 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:58:26.739553 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Nov 1 00:58:26.739559 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:58:26.739566 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:58:26.739620 kernel: rtc_cmos 00:01: registered as rtc0
Nov 1 00:58:26.739666 kernel: rtc_cmos 00:01: setting system clock to 2025-11-01T00:58:26 UTC (1761958706)
Nov 1 00:58:26.739709 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Nov 1 00:58:26.739718 kernel: intel_pstate: CPU model not supported
Nov 1 00:58:26.739724 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:58:26.739730 kernel: Segment Routing with IPv6
Nov 1 00:58:26.739737 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:58:26.739745 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:58:26.739751 kernel: Key type dns_resolver registered
Nov 1 00:58:26.739757 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:58:26.739763 kernel: IPI shorthand broadcast: enabled
Nov 1 00:58:26.739769 kernel: sched_clock: Marking stable (879106002, 221569555)->(1166217399, -65541842)
Nov 1 00:58:26.739775 kernel: registered taskstats version 1
Nov 1 00:58:26.739782 kernel: Loading compiled-in X.509 certificates
Nov 1 00:58:26.739788 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:58:26.739793 kernel: Key type .fscrypt registered
Nov 1 00:58:26.739800 kernel: Key type fscrypt-provisioning registered
Nov 1 00:58:26.739806 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:58:26.739813 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:58:26.739819 kernel: ima: No architecture policies found
Nov 1 00:58:26.739825 kernel: clk: Disabling unused clocks
Nov 1 00:58:26.739831 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:58:26.739838 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:58:26.739844 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:58:26.739851 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:58:26.739857 kernel: Run /init as init process
Nov 1 00:58:26.739863 kernel: with arguments:
Nov 1 00:58:26.739869 kernel: /init
Nov 1 00:58:26.739875 kernel: with environment:
Nov 1 00:58:26.739881 kernel: HOME=/
Nov 1 00:58:26.739887 kernel: TERM=linux
Nov 1 00:58:26.739893 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:58:26.739901 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:58:26.739911 systemd[1]: Detected virtualization vmware.
Nov 1 00:58:26.739918 systemd[1]: Detected architecture x86-64.
Nov 1 00:58:26.739924 systemd[1]: Running in initrd.
Nov 1 00:58:26.739930 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:58:26.739936 systemd[1]: Hostname set to .
Nov 1 00:58:26.739942 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:58:26.739949 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:58:26.739955 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:58:26.739962 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:58:26.739968 systemd[1]: Reached target paths.target.
Nov 1 00:58:26.739974 systemd[1]: Reached target slices.target.
Nov 1 00:58:26.739981 systemd[1]: Reached target swap.target.
Nov 1 00:58:26.739987 systemd[1]: Reached target timers.target.
Nov 1 00:58:26.739994 systemd[1]: Listening on iscsid.socket.
Nov 1 00:58:26.740000 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:58:26.740007 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:58:26.740013 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:58:26.740020 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:58:26.740026 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:58:26.740032 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:58:26.740038 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:58:26.740044 systemd[1]: Reached target sockets.target.
Nov 1 00:58:26.740051 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:58:26.740057 systemd[1]: Finished network-cleanup.service.
Nov 1 00:58:26.740064 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:58:26.740071 systemd[1]: Starting systemd-journald.service...
Nov 1 00:58:26.740077 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:58:26.740084 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:58:26.740090 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:58:26.740096 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:58:26.740102 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:58:26.740109 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:58:26.740115 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:58:26.740122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:58:26.740129 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:58:26.740135 kernel: audit: type=1130 audit(1761958706.676:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.740141 kernel: audit: type=1130 audit(1761958706.677:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.740148 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:58:26.740154 kernel: audit: type=1130 audit(1761958706.698:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.740160 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:58:26.740167 systemd[1]: Started systemd-resolved.service.
Nov 1 00:58:26.740180 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:58:26.740189 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:58:26.740196 kernel: audit: type=1130 audit(1761958706.706:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.740202 kernel: Bridge firewalling registered
Nov 1 00:58:26.740209 kernel: SCSI subsystem initialized
Nov 1 00:58:26.740219 systemd-journald[217]: Journal started
Nov 1 00:58:26.740250 systemd-journald[217]: Runtime Journal (/run/log/journal/c87460b76b1741c49f0a5320e6e23750) is 4.8M, max 38.8M, 34.0M free.
Nov 1 00:58:26.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.662389 systemd-modules-load[218]: Inserted module 'overlay'
Nov 1 00:58:26.698176 systemd-resolved[219]: Positive Trust Anchors:
Nov 1 00:58:26.698183 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:58:26.745536 systemd[1]: Started systemd-journald.service.
Nov 1 00:58:26.745550 kernel: audit: type=1130 audit(1761958706.740:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.698202 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:58:26.749559 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:58:26.749574 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:58:26.749583 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:58:26.704443 systemd-resolved[219]: Defaulting to hostname 'linux'.
Nov 1 00:58:26.715716 systemd-modules-load[218]: Inserted module 'br_netfilter'
Nov 1 00:58:26.750230 dracut-cmdline[233]: dracut-dracut-053
Nov 1 00:58:26.750230 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Nov 1 00:58:26.750230 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:58:26.751009 systemd-modules-load[218]: Inserted module 'dm_multipath'
Nov 1 00:58:26.751499 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:58:26.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.754885 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:58:26.755342 kernel: audit: type=1130 audit(1761958706.751:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.758757 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:58:26.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.762462 kernel: audit: type=1130 audit(1761958706.757:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.775292 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:58:26.788288 kernel: iscsi: registered transport (tcp)
Nov 1 00:58:26.805289 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:58:26.805327 kernel: QLogic iSCSI HBA Driver
Nov 1 00:58:26.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.822004 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:58:26.825513 kernel: audit: type=1130 audit(1761958706.820:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:26.822665 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:58:26.863284 kernel: raid6: avx2x4 gen() 47937 MB/s
Nov 1 00:58:26.879286 kernel: raid6: avx2x4 xor() 21212 MB/s
Nov 1 00:58:26.896288 kernel: raid6: avx2x2 gen() 49158 MB/s
Nov 1 00:58:26.913282 kernel: raid6: avx2x2 xor() 31612 MB/s
Nov 1 00:58:26.930287 kernel: raid6: avx2x1 gen() 43823 MB/s
Nov 1 00:58:26.947288 kernel: raid6: avx2x1 xor() 27616 MB/s
Nov 1 00:58:26.964282 kernel: raid6: sse2x4 gen() 20816 MB/s
Nov 1 00:58:26.981409 kernel: raid6: sse2x4 xor() 11829 MB/s
Nov 1 00:58:26.998291 kernel: raid6: sse2x2 gen() 21273 MB/s
Nov 1 00:58:27.015283 kernel: raid6: sse2x2 xor() 13249 MB/s
Nov 1 00:58:27.032287 kernel: raid6: sse2x1 gen() 17746 MB/s
Nov 1 00:58:27.049527 kernel: raid6: sse2x1 xor() 8830 MB/s
Nov 1 00:58:27.049569 kernel: raid6: using algorithm avx2x2 gen() 49158 MB/s
Nov 1 00:58:27.049584 kernel: raid6: .... xor() 31612 MB/s, rmw enabled
Nov 1 00:58:27.050692 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:58:27.059282 kernel: xor: automatically using best checksumming function avx
Nov 1 00:58:27.120288 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 00:58:27.125165 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:58:27.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:27.125830 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:58:27.124000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:58:27.124000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:58:27.129288 kernel: audit: type=1130 audit(1761958707.124:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:27.136533 systemd-udevd[415]: Using default interface naming scheme 'v252'.
Nov 1 00:58:27.139238 systemd[1]: Started systemd-udevd.service.
Nov 1 00:58:27.139783 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:58:27.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:27.147605 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Nov 1 00:58:27.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:27.163364 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:58:27.163884 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:58:27.227460 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:58:27.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:27.282286 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Nov 1 00:58:27.282317 kernel: vmw_pvscsi: using 64bit dma
Nov 1 00:58:27.286285 kernel: vmw_pvscsi: max_id: 16
Nov 1 00:58:27.286307 kernel: vmw_pvscsi: setting ring_pages to 8
Nov 1 00:58:27.297011 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI
Nov 1 00:58:27.297045 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Nov 1 00:58:27.312405 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:58:27.312421 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Nov 1 00:58:27.312510 kernel: vmw_pvscsi: enabling reqCallThreshold
Nov 1 00:58:27.312519 kernel: vmw_pvscsi: driver-based request coalescing enabled
Nov 1 00:58:27.312530 kernel: vmw_pvscsi: using MSI-X
Nov 1 00:58:27.312542 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Nov 1 00:58:27.313463 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Nov 1 00:58:27.316747 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Nov 1 00:58:27.318279 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Nov 1 00:58:27.320289 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:58:27.320312 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:58:27.328291 kernel: libata version 3.00 loaded.
Nov 1 00:58:27.328334 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Nov 1 00:58:27.337747 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 1 00:58:27.337822 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Nov 1 00:58:27.337884 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Nov 1 00:58:27.337944 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Nov 1 00:58:27.338012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:58:27.338025 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 1 00:58:27.341280 kernel: ata_piix 0000:00:07.1: version 2.13
Nov 1 00:58:27.347517 kernel: scsi host1: ata_piix
Nov 1 00:58:27.347589 kernel: scsi host2: ata_piix
Nov 1 00:58:27.347648 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Nov 1 00:58:27.347656 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Nov 1 00:58:27.371286 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (471)
Nov 1 00:58:27.373171 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:58:27.376721 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:58:27.376849 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:58:27.378955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:58:27.380940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:58:27.381556 systemd[1]: Starting disk-uuid.service...
Nov 1 00:58:27.410283 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:58:27.414288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:58:27.513369 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Nov 1 00:58:27.520284 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Nov 1 00:58:27.546748 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Nov 1 00:58:27.563223 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:58:27.563239 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:58:28.418640 disk-uuid[538]: The operation has completed successfully.
Nov 1 00:58:28.419285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:58:28.459798 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:58:28.460154 systemd[1]: Finished disk-uuid.service.
Nov 1 00:58:28.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.461016 systemd[1]: Starting verity-setup.service...
Nov 1 00:58:28.473286 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:58:28.517714 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:58:28.518994 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:58:28.520157 systemd[1]: Finished verity-setup.service.
Nov 1 00:58:28.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.577605 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:58:28.578052 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:58:28.579073 systemd[1]: Starting afterburn-network-kargs.service...
Nov 1 00:58:28.579953 systemd[1]: Starting ignition-setup.service...
Nov 1 00:58:28.594729 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:58:28.594763 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:58:28.594772 kernel: BTRFS info (device sda6): has skinny extents
Nov 1 00:58:28.599281 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:58:28.606389 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:58:28.616118 systemd[1]: Finished ignition-setup.service.
Nov 1 00:58:28.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.617325 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:58:28.663559 systemd[1]: Finished afterburn-network-kargs.service.
Nov 1 00:58:28.664188 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:58:28.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.718638 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:58:28.719564 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:58:28.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.717000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:58:28.740935 systemd-networkd[734]: lo: Link UP
Nov 1 00:58:28.740941 systemd-networkd[734]: lo: Gained carrier
Nov 1 00:58:28.741258 systemd-networkd[734]: Enumeration completed
Nov 1 00:58:28.745716 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 1 00:58:28.745833 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 1 00:58:28.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.741532 systemd[1]: Started systemd-networkd.service.
Nov 1 00:58:28.741678 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Nov 1 00:58:28.741683 systemd[1]: Reached target network.target.
Nov 1 00:58:28.742206 systemd[1]: Starting iscsiuio.service...
Nov 1 00:58:28.745225 systemd-networkd[734]: ens192: Link UP
Nov 1 00:58:28.745228 systemd-networkd[734]: ens192: Gained carrier
Nov 1 00:58:28.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.747300 systemd[1]: Started iscsiuio.service.
Nov 1 00:58:28.747896 systemd[1]: Starting iscsid.service...
Nov 1 00:58:28.749918 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:58:28.749918 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Nov 1 00:58:28.749918 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:58:28.749918 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:58:28.749918 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:58:28.750911 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:58:28.750732 systemd[1]: Started iscsid.service.
Nov 1 00:58:28.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.751702 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:58:28.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.758769 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:58:28.758957 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:58:28.759245 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:58:28.759344 systemd[1]: Reached target remote-fs.target.
Nov 1 00:58:28.760444 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:58:28.765181 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:58:28.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.765938 ignition[606]: Ignition 2.14.0
Nov 1 00:58:28.765945 ignition[606]: Stage: fetch-offline
Nov 1 00:58:28.765978 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:58:28.765991 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Nov 1 00:58:28.773017 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 1 00:58:28.773093 ignition[606]: parsed url from cmdline: ""
Nov 1 00:58:28.773095 ignition[606]: no config URL provided
Nov 1 00:58:28.773098 ignition[606]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:58:28.773103 ignition[606]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:58:28.780574 ignition[606]: config successfully fetched
Nov 1 00:58:28.780600 ignition[606]: parsing config with SHA512: 44ae66970f4a47a00f7ddccf3b6e3ff778a99f0301f75bdaf32e6bce75b21c46878d3b9e35625273e5fc058875304aa72fccdc4cf43424ee0fbdc6a6dbba878e
Nov 1 00:58:28.783885 unknown[606]: fetched base config from "system"
Nov 1 00:58:28.783891 unknown[606]: fetched user config from "vmware"
Nov 1 00:58:28.784200 ignition[606]: fetch-offline: fetch-offline passed
Nov 1 00:58:28.784240 ignition[606]: Ignition finished successfully
Nov 1 00:58:28.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.784855 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:58:28.785005 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:58:28.785511 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:58:28.790871 ignition[754]: Ignition 2.14.0
Nov 1 00:58:28.790879 ignition[754]: Stage: kargs
Nov 1 00:58:28.790941 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:58:28.790954 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Nov 1 00:58:28.792205 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 1 00:58:28.793782 ignition[754]: kargs: kargs passed
Nov 1 00:58:28.793811 ignition[754]: Ignition finished successfully
Nov 1 00:58:28.794762 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:58:28.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.795414 systemd[1]: Starting ignition-disks.service...
Nov 1 00:58:28.799854 ignition[760]: Ignition 2.14.0
Nov 1 00:58:28.800084 ignition[760]: Stage: disks
Nov 1 00:58:28.800259 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:58:28.800423 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Nov 1 00:58:28.801709 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 1 00:58:28.803303 ignition[760]: disks: disks passed
Nov 1 00:58:28.803341 ignition[760]: Ignition finished successfully
Nov 1 00:58:28.803971 systemd[1]: Finished ignition-disks.service.
Nov 1 00:58:28.804146 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:58:28.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.804256 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:58:28.804409 systemd[1]: Reached target local-fs.target.
Nov 1 00:58:28.804575 systemd[1]: Reached target sysinit.target.
Nov 1 00:58:28.804732 systemd[1]: Reached target basic.target.
Nov 1 00:58:28.805407 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:58:28.816629 systemd-fsck[768]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks
Nov 1 00:58:28.818074 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:58:28.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.818719 systemd[1]: Mounting sysroot.mount...
Nov 1 00:58:28.826393 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:58:28.826314 systemd[1]: Mounted sysroot.mount.
Nov 1 00:58:28.826585 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:58:28.827707 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:58:28.828227 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Nov 1 00:58:28.828251 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:58:28.828273 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:58:28.829888 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:58:28.830599 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:58:28.833588 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:58:28.837607 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:58:28.839972 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:58:28.842731 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:58:28.873915 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:58:28.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.874549 systemd[1]: Starting ignition-mount.service...
Nov 1 00:58:28.875071 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:58:28.879125 bash[819]: umount: /sysroot/usr/share/oem: not mounted.
Nov 1 00:58:28.886452 ignition[820]: INFO : Ignition 2.14.0
Nov 1 00:58:28.886761 ignition[820]: INFO : Stage: mount
Nov 1 00:58:28.886971 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:58:28.887126 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Nov 1 00:58:28.889352 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 1 00:58:28.891670 ignition[820]: INFO : mount: mount passed
Nov 1 00:58:28.891806 ignition[820]: INFO : Ignition finished successfully
Nov 1 00:58:28.893416 systemd[1]: Finished ignition-mount.service.
Nov 1 00:58:28.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.897418 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:58:28.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:28.932719 systemd-resolved[219]: Detected conflict on linux IN A 139.178.70.108
Nov 1 00:58:28.932728 systemd-resolved[219]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Nov 1 00:58:29.534061 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:58:29.543291 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (831)
Nov 1 00:58:29.545628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:58:29.545665 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:58:29.545674 kernel: BTRFS info (device sda6): has skinny extents
Nov 1 00:58:29.550296 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:58:29.552764 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:58:29.553681 systemd[1]: Starting ignition-files.service...
Nov 1 00:58:29.564502 ignition[851]: INFO : Ignition 2.14.0
Nov 1 00:58:29.564771 ignition[851]: INFO : Stage: files
Nov 1 00:58:29.564953 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:58:29.565105 ignition[851]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Nov 1 00:58:29.566691 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 1 00:58:29.568996 ignition[851]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:58:29.569619 ignition[851]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:58:29.569783 ignition[851]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:58:29.573169 ignition[851]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:58:29.573490 ignition[851]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:58:29.574441 unknown[851]: wrote ssh authorized keys file for user: core
Nov 1 00:58:29.574681 ignition[851]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:58:29.575394 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:58:29.575621 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:58:29.575809 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:58:29.576020 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:58:29.621206 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:58:29.703100 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:58:29.703100 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:58:29.703517 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:58:29.704594 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:58:29.704594 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:58:29.704594 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:58:29.704594 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:58:29.704594 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Nov 1 00:58:29.705467 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:58:29.710158 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3401956376"
Nov 1 00:58:29.710390 ignition[851]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3401956376": device or resource busy
Nov 1 00:58:29.710599 ignition[851]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3401956376", trying btrfs: device or resource busy
Nov 1 00:58:29.710812 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3401956376"
Nov 1 00:58:29.712602 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3401956376"
Nov 1 00:58:29.713718 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3401956376"
Nov 1 00:58:29.714834 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3401956376"
Nov 1 00:58:29.714834 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Nov 1 00:58:29.714834 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:58:29.714834 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:58:29.714558 systemd[1]: mnt-oem3401956376.mount: Deactivated successfully.
Nov 1 00:58:30.171448 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Nov 1 00:58:30.516273 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:58:30.524226 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(12): [started] processing unit "containerd.service"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(12): [finished] processing unit "containerd.service"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Nov 1 00:58:30.524519 ignition[851]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(18): [started] setting preset to enabled for "vmtoolsd.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(18): [finished] setting preset to enabled for "vmtoolsd.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:58:30.526533 ignition[851]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:58:30.619457 systemd-networkd[734]: ens192: Gained IPv6LL
Nov 1 00:58:30.717352 ignition[851]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:58:30.717677 ignition[851]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:58:30.717677 ignition[851]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:58:30.717677 ignition[851]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:58:30.717677 ignition[851]: INFO : files: files passed
Nov 1 00:58:30.717677 ignition[851]: INFO : Ignition finished successfully
Nov 1 00:58:30.718499 systemd[1]: Finished ignition-files.service.
Nov 1 00:58:30.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.719802 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:58:30.719921 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:58:30.720335 systemd[1]: Starting ignition-quench.service...
Nov 1 00:58:30.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.722626 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:58:30.722687 systemd[1]: Finished ignition-quench.service.
Nov 1 00:58:30.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.724862 initrd-setup-root-after-ignition[877]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:58:30.725669 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:58:30.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.725848 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:58:30.726435 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:58:30.736488 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:58:30.736547 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:58:30.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.737019 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:58:30.737293 systemd[1]: Reached target initrd.target.
Nov 1 00:58:30.737518 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:58:30.738163 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:58:30.744941 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:58:30.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.745767 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:58:30.751634 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:58:30.751952 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:58:30.752237 systemd[1]: Stopped target timers.target.
Nov 1 00:58:30.752502 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:58:30.752723 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:58:30.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.753092 systemd[1]: Stopped target initrd.target.
Nov 1 00:58:30.756083 kernel: kauditd_printk_skb: 31 callbacks suppressed
Nov 1 00:58:30.756098 kernel: audit: type=1131 audit(1761958710.751:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.756290 systemd[1]: Stopped target basic.target.
Nov 1 00:58:30.756550 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:58:30.756813 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:58:30.757070 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:58:30.757361 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:58:30.757611 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:58:30.757872 systemd[1]: Stopped target sysinit.target.
Nov 1 00:58:30.758128 systemd[1]: Stopped target local-fs.target.
Nov 1 00:58:30.758392 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:58:30.758645 systemd[1]: Stopped target swap.target.
Nov 1 00:58:30.758869 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:58:30.759081 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:58:30.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.759498 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:58:30.761967 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:58:30.762165 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:58:30.762285 kernel: audit: type=1131 audit(1761958710.758:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.762518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:58:30.762601 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:58:30.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.765303 systemd[1]: Stopped target paths.target.
Nov 1 00:58:30.767677 kernel: audit: type=1131 audit(1761958710.761:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.767694 kernel: audit: type=1131 audit(1761958710.763:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.767877 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:58:30.771306 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:58:30.771624 systemd[1]: Stopped target slices.target.
Nov 1 00:58:30.771920 systemd[1]: Stopped target sockets.target.
Nov 1 00:58:30.772189 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:58:30.772531 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:58:30.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.773014 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:58:30.773138 systemd[1]: Stopped ignition-files.service.
Nov 1 00:58:30.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.776749 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:58:30.778508 kernel: audit: type=1131 audit(1761958710.771:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.778530 kernel: audit: type=1131 audit(1761958710.774:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.781458 iscsid[739]: iscsid shutting down.
Nov 1 00:58:30.782057 systemd[1]: Stopping iscsid.service...
Nov 1 00:58:30.782365 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:58:30.782670 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:58:30.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.784066 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:58:30.788458 kernel: audit: type=1131 audit(1761958710.781:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.788494 ignition[890]: INFO : Ignition 2.14.0
Nov 1 00:58:30.788494 ignition[890]: INFO : Stage: umount
Nov 1 00:58:30.788494 ignition[890]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:58:30.788494 ignition[890]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Nov 1 00:58:30.790386 ignition[890]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Nov 1 00:58:30.790790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:58:30.791334 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:58:30.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.791794 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:58:30.794302 kernel: audit: type=1131 audit(1761958710.790:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.791910 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:58:30.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.796208 systemd[1]: iscsid.service: Deactivated successfully.
Nov 1 00:58:30.796345 systemd[1]: Stopped iscsid.service.
Nov 1 00:58:30.797347 kernel: audit: type=1131 audit(1761958710.793:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.798044 ignition[890]: INFO : umount: umount passed
Nov 1 00:58:30.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.798784 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:58:30.798890 systemd[1]: Closed iscsid.socket.
Nov 1 00:58:30.800948 ignition[890]: INFO : Ignition finished successfully
Nov 1 00:58:30.801280 kernel: audit: type=1131 audit(1761958710.797:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.802297 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:58:30.803011 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:58:30.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.803345 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:58:30.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.803394 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:58:30.803597 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:58:30.803637 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:58:30.804124 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:58:30.804148 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:58:30.804248 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:58:30.804276 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:58:30.804374 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:58:30.804393 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:58:30.804597 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:58:30.804643 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:58:30.804763 systemd[1]: Stopped target network.target.
Nov 1 00:58:30.804875 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:58:30.804892 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:58:30.805109 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:58:30.805569 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:58:30.810185 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:58:30.810246 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:58:30.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.812127 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:58:30.812183 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:58:30.811000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:58:30.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.812883 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:58:30.812906 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:58:30.813426 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:58:30.813524 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:58:30.813552 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:58:30.813689 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Nov 1 00:58:30.813711 systemd[1]: Stopped afterburn-network-kargs.service.
Nov 1 00:58:30.813821 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:58:30.813843 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:58:30.814001 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:58:30.814021 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:58:30.816630 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:58:30.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.814000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:58:30.817384 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:58:30.820321 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:58:30.820595 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:58:30.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.821012 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:58:30.821397 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:58:30.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.822013 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:58:30.822046 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:58:30.822485 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:58:30.822512 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:58:30.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.822802 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:58:30.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.822835 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:58:30.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.823117 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:58:30.823149 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:58:30.823490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:58:30.823522 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:58:30.825042 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:58:30.825290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:58:30.825509 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:58:30.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.829649 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:58:30.829868 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:58:30.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:30.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:31.019731 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:58:31.019793 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:58:31.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:31.020077 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:58:31.020190 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:58:31.020215 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:58:31.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:31.020836 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:58:31.038127 systemd[1]: Switching root.
Nov 1 00:58:31.038000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:58:31.039000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:58:31.039000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:58:31.039000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:58:31.039000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:58:31.055473 systemd-journald[217]: Journal stopped
Nov 1 00:58:33.373879 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:58:33.373914 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:58:33.373924 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:58:33.373930 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:58:33.373936 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:58:33.373943 kernel: SELinux: policy capability open_perms=1
Nov 1 00:58:33.373949 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:58:33.373955 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:58:33.373961 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:58:33.373967 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:58:33.373977 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:58:33.373987 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:58:33.373996 systemd[1]: Successfully loaded SELinux policy in 64.174ms.
Nov 1 00:58:33.374005 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.248ms.
Nov 1 00:58:33.374019 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:58:33.374031 systemd[1]: Detected virtualization vmware.
Nov 1 00:58:33.374041 systemd[1]: Detected architecture x86-64.
Nov 1 00:58:33.374047 systemd[1]: Detected first boot.
Nov 1 00:58:33.374057 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:58:33.374068 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:58:33.374078 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:58:33.374088 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:58:33.374099 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:58:33.374110 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:58:33.374122 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:58:33.374131 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Nov 1 00:58:33.378340 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:58:33.378360 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:58:33.378372 systemd[1]: Created slice system-getty.slice.
Nov 1 00:58:33.378383 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:58:33.378395 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:58:33.378411 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:58:33.378423 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:58:33.378434 systemd[1]: Created slice user.slice.
Nov 1 00:58:33.378446 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:58:33.378458 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:58:33.378469 systemd[1]: Set up automount boot.automount.
Nov 1 00:58:33.378481 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:58:33.378492 systemd[1]: Reached target integritysetup.target.
Nov 1 00:58:33.378503 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:58:33.378516 systemd[1]: Reached target remote-fs.target.
Nov 1 00:58:33.378530 systemd[1]: Reached target slices.target.
Nov 1 00:58:33.378540 systemd[1]: Reached target swap.target.
Nov 1 00:58:33.378551 systemd[1]: Reached target torcx.target.
Nov 1 00:58:33.378563 systemd[1]: Reached target veritysetup.target.
Nov 1 00:58:33.378575 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:58:33.378587 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:58:33.378598 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:58:33.378612 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:58:33.378624 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:58:33.378636 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:58:33.378647 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:58:33.378660 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:58:33.378672 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:58:33.378686 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:58:33.378697 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:58:33.378708 systemd[1]: Mounting media.mount...
Nov 1 00:58:33.378720 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:33.378731 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:58:33.378743 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:58:33.378754 systemd[1]: Mounting tmp.mount...
Nov 1 00:58:33.378767 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:58:33.378779 systemd[1]: Starting ignition-delete-config.service...
Nov 1 00:58:33.378790 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:58:33.378801 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:58:33.378812 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:58:33.378824 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:58:33.378837 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:58:33.378849 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:58:33.378861 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:58:33.378876 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:58:33.378891 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:58:33.378903 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:58:33.378915 systemd[1]: Starting systemd-journald.service...
Nov 1 00:58:33.378927 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:58:33.378939 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:58:33.378951 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:58:33.378963 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:58:33.378974 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:33.378987 kernel: fuse: init (API version 7.34)
Nov 1 00:58:33.378998 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:58:33.379009 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:58:33.379020 systemd[1]: Mounted media.mount.
Nov 1 00:58:33.379031 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:58:33.379042 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:58:33.379054 systemd[1]: Mounted tmp.mount.
Nov 1 00:58:33.379065 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:58:33.379081 systemd-journald[1044]: Journal started
Nov 1 00:58:33.379134 systemd-journald[1044]: Runtime Journal (/run/log/journal/a32c92c20dae4c7e9179a4fa94c43319) is 4.8M, max 38.8M, 34.0M free.
Nov 1 00:58:33.293000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:58:33.293000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:58:33.380752 systemd[1]: Started systemd-journald.service.
Nov 1 00:58:33.368000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:58:33.368000 audit[1044]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc6e698f20 a2=4000 a3=7ffc6e698fbc items=0 ppid=1 pid=1044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:33.368000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:58:33.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.380589 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:58:33.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.380698 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:58:33.380964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:58:33.381048 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:58:33.381313 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:58:33.381420 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:58:33.384228 jq[1020]: true
Nov 1 00:58:33.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.381640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:58:33.381717 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:58:33.383373 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:58:33.385916 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:58:33.395778 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:58:33.396625 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:58:33.396825 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:58:33.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.404542 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:58:33.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.404859 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:58:33.405065 systemd[1]: Reached target network-pre.target.
Nov 1 00:58:33.406020 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:58:33.406137 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:58:33.410364 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:58:33.411259 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:58:33.412558 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:58:33.413417 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:58:33.415116 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:58:33.417394 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:58:33.425311 jq[1057]: true
Nov 1 00:58:33.435332 kernel: loop: module loaded
Nov 1 00:58:33.435707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:58:33.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.435820 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:58:33.436031 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:58:33.439621 systemd-journald[1044]: Time spent on flushing to /var/log/journal/a32c92c20dae4c7e9179a4fa94c43319 is 50.403ms for 1939 entries.
Nov 1 00:58:33.439621 systemd-journald[1044]: System Journal (/var/log/journal/a32c92c20dae4c7e9179a4fa94c43319) is 8.0M, max 584.8M, 576.8M free.
Nov 1 00:58:33.533786 systemd-journald[1044]: Received client request to flush runtime journal.
Nov 1 00:58:33.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.449785 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:58:33.450893 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:58:33.472074 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:58:33.534376 udevadm[1103]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:58:33.472245 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:58:33.488001 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:58:33.511471 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:58:33.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.512623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:58:33.516107 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:58:33.517119 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:58:33.534865 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:58:33.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.556595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:58:33.606596 ignition[1085]: Ignition 2.14.0
Nov 1 00:58:33.606821 ignition[1085]: deleting config from guestinfo properties
Nov 1 00:58:33.609182 ignition[1085]: Successfully deleted config
Nov 1 00:58:33.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.609962 systemd[1]: Finished ignition-delete-config.service.
Nov 1 00:58:33.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.895886 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:58:33.897008 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:58:33.910344 systemd-udevd[1113]: Using default interface naming scheme 'v252'.
Nov 1 00:58:33.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:33.963851 systemd[1]: Started systemd-udevd.service.
Nov 1 00:58:33.965208 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:58:33.973999 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:58:34.001876 systemd[1]: Found device dev-ttyS0.device.
Nov 1 00:58:34.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.014349 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:58:34.042282 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:58:34.046306 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:58:34.074006 systemd-networkd[1114]: lo: Link UP
Nov 1 00:58:34.074013 systemd-networkd[1114]: lo: Gained carrier
Nov 1 00:58:34.074577 systemd-networkd[1114]: Enumeration completed
Nov 1 00:58:34.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.074652 systemd-networkd[1114]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Nov 1 00:58:34.074658 systemd[1]: Started systemd-networkd.service.
Nov 1 00:58:34.077496 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Nov 1 00:58:34.077629 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Nov 1 00:58:34.078521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
Nov 1 00:58:34.078997 systemd-networkd[1114]: ens192: Link UP
Nov 1 00:58:34.079091 systemd-networkd[1114]: ens192: Gained carrier
Nov 1 00:58:34.126636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:58:34.132300 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
Nov 1 00:58:34.139994 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Nov 1 00:58:34.140085 kernel: Guest personality initialized and is active
Nov 1 00:58:34.145379 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 1 00:58:34.145430 kernel: Initialized host personality
Nov 1 00:58:34.139000 audit[1123]: AVC avc: denied { confidentiality } for pid=1123 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:58:34.139000 audit[1123]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5559ea4b02c0 a1=338ec a2=7efdaf05abc5 a3=5 items=110 ppid=1113 pid=1123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:58:34.139000 audit: CWD cwd="/"
Nov 1 00:58:34.139000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=1 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=2 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=3 name=(null) inode=24743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=4 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=5 name=(null) inode=24744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=6 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=7 name=(null) inode=24745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=8 name=(null) inode=24745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=9 name=(null) inode=24746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=10 name=(null) inode=24745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=11 name=(null) inode=24747 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=12 name=(null) inode=24745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=13 name=(null) inode=24748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=14 name=(null) inode=24745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=15 name=(null) inode=24749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=16 name=(null) inode=24745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=17 name=(null) inode=24750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=18 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=19 name=(null) inode=24751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=20 name=(null) inode=24751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=21 name=(null) inode=24752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=22 name=(null) inode=24751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=23 name=(null) inode=24753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=24 name=(null) inode=24751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=25 name=(null) inode=24754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=26 name=(null) inode=24751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=27 name=(null) inode=24755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=28 name=(null) inode=24751 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=29 name=(null) inode=24756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=30 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=31 name=(null) inode=24757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=32 name=(null) inode=24757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=33 name=(null) inode=24758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=34 name=(null) inode=24757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=35 name=(null) inode=24759 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=36 name=(null) inode=24757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=37 name=(null) inode=24760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=38 name=(null) inode=24757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=39 name=(null) inode=24761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=40 name=(null) inode=24757 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=41 name=(null) inode=24762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=42 name=(null) inode=24742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=43 name=(null) inode=24763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=44 name=(null) inode=24763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=45 name=(null) inode=24764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=46 name=(null) inode=24763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=47 name=(null) inode=24765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=48 name=(null) inode=24763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=49 name=(null) inode=24766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=50 name=(null) inode=24763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=51 name=(null) inode=24767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=52 name=(null) inode=24763 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=53 name=(null) inode=24772 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=55 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=56 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=57 name=(null) inode=24774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=58 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=59 name=(null) inode=24775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=60 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=61 name=(null) inode=24776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=62 name=(null) inode=24776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=63 name=(null) inode=24777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=64 name=(null) inode=24776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=65 name=(null) inode=24778 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=66 name=(null) inode=24776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=67 name=(null) inode=24779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:58:34.139000 audit: PATH item=68 name=(null) inode=24776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:58:34.139000 audit: PATH item=69 name=(null) inode=24780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=70 name=(null) inode=24776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=71 name=(null) inode=24781 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=72 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=73 name=(null) inode=24782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=74 name=(null) inode=24782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=75 name=(null) inode=24783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=76 name=(null) inode=24782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=77 name=(null) inode=24784 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=78 name=(null) inode=24782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=79 name=(null) inode=24785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=80 name=(null) inode=24782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=81 name=(null) inode=24786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=82 name=(null) inode=24782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=83 name=(null) inode=24787 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=84 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=85 name=(null) inode=24788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=86 name=(null) inode=24788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=87 name=(null) inode=24789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=88 name=(null) inode=24788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=89 name=(null) inode=24790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=90 name=(null) inode=24788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=91 name=(null) inode=24791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=92 name=(null) inode=24788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=93 name=(null) inode=24792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=94 name=(null) inode=24788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=95 name=(null) inode=24793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=96 name=(null) inode=24773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=97 name=(null) inode=24794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=98 name=(null) inode=24794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=99 name=(null) inode=24795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=100 name=(null) inode=24794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=101 name=(null) inode=24796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=102 name=(null) inode=24794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=103 name=(null) inode=24797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=104 name=(null) inode=24794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=105 name=(null) inode=24798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=106 name=(null) inode=24794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=107 name=(null) inode=24799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PATH item=109 name=(null) inode=24800 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:58:34.139000 audit: PROCTITLE proctitle="(udev-worker)"
Nov 1 00:58:34.177343 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Nov 1 00:58:34.181281 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:58:34.184016 (udev-worker)[1119]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Nov 1 00:58:34.184286 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Nov 1 00:58:34.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.193599 systemd[1]: Finished systemd-udev-settle.service.
Nov 1 00:58:34.194742 systemd[1]: Starting lvm2-activation-early.service...
Nov 1 00:58:34.210974 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:58:34.236908 systemd[1]: Finished lvm2-activation-early.service.
Nov 1 00:58:34.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.237093 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:58:34.238198 systemd[1]: Starting lvm2-activation.service...
Nov 1 00:58:34.240877 lvm[1149]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:58:34.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.257905 systemd[1]: Finished lvm2-activation.service.
Nov 1 00:58:34.258089 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:58:34.258197 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:58:34.258210 systemd[1]: Reached target local-fs.target.
Nov 1 00:58:34.258315 systemd[1]: Reached target machines.target.
Nov 1 00:58:34.259438 systemd[1]: Starting ldconfig.service...
Nov 1 00:58:34.260309 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:58:34.260342 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:58:34.261427 systemd[1]: Starting systemd-boot-update.service...
Nov 1 00:58:34.262141 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Nov 1 00:58:34.263309 systemd[1]: Starting systemd-machine-id-commit.service...
Nov 1 00:58:34.264309 systemd[1]: Starting systemd-sysext.service...
Nov 1 00:58:34.293214 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1152 (bootctl)
Nov 1 00:58:34.294103 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Nov 1 00:58:34.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.298903 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Nov 1 00:58:34.306072 systemd[1]: Unmounting usr-share-oem.mount...
Nov 1 00:58:34.308707 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Nov 1 00:58:34.308843 systemd[1]: Unmounted usr-share-oem.mount.
Nov 1 00:58:34.359294 kernel: loop0: detected capacity change from 0 to 224512
Nov 1 00:58:34.725975 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:58:34.726387 systemd[1]: Finished systemd-machine-id-commit.service.
Nov 1 00:58:34.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.741524 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:58:34.756768 systemd-fsck[1164]: fsck.fat 4.2 (2021-01-31)
Nov 1 00:58:34.756768 systemd-fsck[1164]: /dev/sda1: 790 files, 120773/258078 clusters
Nov 1 00:58:34.757781 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Nov 1 00:58:34.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.758931 systemd[1]: Mounting boot.mount...
Nov 1 00:58:34.764284 kernel: loop1: detected capacity change from 0 to 224512
Nov 1 00:58:34.770752 systemd[1]: Mounted boot.mount.
Nov 1 00:58:34.781929 systemd[1]: Finished systemd-boot-update.service.
Nov 1 00:58:34.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.786035 (sd-sysext)[1171]: Using extensions 'kubernetes'.
Nov 1 00:58:34.786461 (sd-sysext)[1171]: Merged extensions into '/usr'.
Nov 1 00:58:34.796635 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:34.797835 systemd[1]: Mounting usr-share-oem.mount...
Nov 1 00:58:34.798690 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:58:34.799444 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:58:34.800157 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:58:34.800340 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:58:34.800421 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:58:34.800494 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:34.803632 systemd[1]: Mounted usr-share-oem.mount.
Nov 1 00:58:34.803920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:58:34.804024 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:58:34.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.808261 systemd[1]: Finished systemd-sysext.service.
Nov 1 00:58:34.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.808622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:58:34.808707 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:58:34.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.808928 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:58:34.809397 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:58:34.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:34.810505 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:58:34.811305 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:58:34.811338 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:58:34.812147 systemd[1]: Starting systemd-tmpfiles-setup.service...
Nov 1 00:58:34.816357 systemd[1]: Reloading.
Nov 1 00:58:34.829717 systemd-tmpfiles[1187]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Nov 1 00:58:34.845418 systemd-tmpfiles[1187]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:58:34.851261 systemd-tmpfiles[1187]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:58:34.852489 /usr/lib/systemd/system-generators/torcx-generator[1207]: time="2025-11-01T00:58:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:58:34.852508 /usr/lib/systemd/system-generators/torcx-generator[1207]: time="2025-11-01T00:58:34Z" level=info msg="torcx already run"
Nov 1 00:58:34.940908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:58:34.940919 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:58:34.957844 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:58:35.001808 systemd[1]: Finished systemd-tmpfiles-setup.service.
Nov 1 00:58:35.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.003608 systemd[1]: Starting audit-rules.service...
Nov 1 00:58:35.004567 systemd[1]: Starting clean-ca-certificates.service...
Nov 1 00:58:35.005510 systemd[1]: Starting systemd-journal-catalog-update.service...
Nov 1 00:58:35.006619 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:58:35.007941 systemd[1]: Starting systemd-timesyncd.service...
Nov 1 00:58:35.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.013722 systemd[1]: Starting systemd-update-utmp.service...
Nov 1 00:58:35.014172 systemd[1]: Finished clean-ca-certificates.service.
Nov 1 00:58:35.015590 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:58:35.019000 audit[1287]: SYSTEM_BOOT pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.025016 systemd[1]: Finished systemd-update-utmp.service.
Nov 1 00:58:35.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.025853 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:35.026798 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:58:35.027975 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:58:35.028854 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:58:35.028987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:58:35.029063 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:58:35.029135 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:58:35.029184 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:35.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.032769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:58:35.032856 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:58:35.033188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:58:35.033263 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:58:35.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.033499 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:58:35.033703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:58:35.033793 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:58:35.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:58:35.034004 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:58:35.035545 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:58:35.036279 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:58:35.037006 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:58:35.038012 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:58:35.038133 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:58:35.038204 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:58:35.038285 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:58:35.038334 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:35.044623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:58:35.044721 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:58:35.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.045060 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:58:35.046616 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:35.047446 systemd[1]: Starting modprobe@drm.service... Nov 1 00:58:35.048519 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:58:35.048679 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:58:35.048754 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:35.049801 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:58:35.050377 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 1 00:58:35.050451 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:58:35.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.052879 systemd[1]: Finished ensure-sysext.service. Nov 1 00:58:35.053147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:58:35.053229 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:58:35.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.054819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:58:35.054905 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:58:35.055178 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:58:35.055255 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:58:35.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.055490 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:58:35.055571 systemd[1]: Finished modprobe@drm.service. Nov 1 00:58:35.055720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:58:35.055748 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:58:35.069492 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:58:35.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.082516 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:58:35.084929 systemd[1]: Finished ldconfig.service. 
Nov 1 00:58:35.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.086097 systemd[1]: Starting systemd-update-done.service... Nov 1 00:58:35.093413 systemd[1]: Finished systemd-update-done.service. Nov 1 00:58:35.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:58:35.107928 systemd-resolved[1278]: Positive Trust Anchors: Nov 1 00:58:35.108089 systemd-resolved[1278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:58:35.108147 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:58:35.107000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:58:35.107000 audit[1321]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd4d2017f0 a2=420 a3=0 items=0 ppid=1275 pid=1321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:58:35.107000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:58:35.108906 augenrules[1321]: No rules Nov 1 00:58:35.108961 
systemd[1]: Finished audit-rules.service. Nov 1 00:58:35.115083 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:58:35.115296 systemd[1]: Reached target time-set.target. Nov 1 00:58:35.127673 systemd-resolved[1278]: Defaulting to hostname 'linux'. Nov 1 00:58:35.128719 systemd[1]: Started systemd-resolved.service. Nov 1 00:58:35.128862 systemd[1]: Reached target network.target. Nov 1 00:58:35.128952 systemd[1]: Reached target nss-lookup.target. Nov 1 00:58:35.129044 systemd[1]: Reached target sysinit.target. Nov 1 00:58:35.129188 systemd[1]: Started motdgen.path. Nov 1 00:58:35.129299 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:58:35.129488 systemd[1]: Started logrotate.timer. Nov 1 00:58:35.129606 systemd[1]: Started mdadm.timer. Nov 1 00:58:35.129689 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:58:35.129785 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:58:35.129808 systemd[1]: Reached target paths.target. Nov 1 00:58:35.129891 systemd[1]: Reached target timers.target. Nov 1 00:58:35.130130 systemd[1]: Listening on dbus.socket. Nov 1 00:58:35.131110 systemd[1]: Starting docker.socket... Nov 1 00:58:35.132089 systemd[1]: Listening on sshd.socket. Nov 1 00:58:35.132226 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:35.132441 systemd[1]: Listening on docker.socket. Nov 1 00:58:35.132539 systemd[1]: Reached target sockets.target. Nov 1 00:58:35.132622 systemd[1]: Reached target basic.target. Nov 1 00:58:35.132782 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:58:35.132810 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Nov 1 00:58:35.132823 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:58:35.133643 systemd[1]: Starting containerd.service... Nov 1 00:58:35.134705 systemd[1]: Starting dbus.service... Nov 1 00:58:35.135562 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:58:35.136554 systemd[1]: Starting extend-filesystems.service... Nov 1 00:58:35.139278 jq[1333]: false Nov 1 00:58:35.138908 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:58:35.139743 systemd[1]: Starting motdgen.service... Nov 1 00:58:35.140803 systemd[1]: Starting prepare-helm.service... Nov 1 00:58:35.141784 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:58:35.143976 systemd[1]: Starting sshd-keygen.service... Nov 1 00:58:35.145644 systemd[1]: Starting systemd-logind.service... Nov 1 00:58:35.145757 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:58:35.145801 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:58:35.147613 systemd[1]: Starting update-engine.service... Nov 1 00:58:35.148455 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:58:35.149680 systemd[1]: Starting vmtoolsd.service... Nov 1 00:58:35.153179 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:58:35.153330 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:58:35.156584 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:58:35.156718 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:58:35.162867 jq[1343]: true Nov 1 00:58:35.170896 systemd[1]: Started vmtoolsd.service. 
Nov 1 00:58:35.172125 jq[1363]: true Nov 1 00:58:35.175609 extend-filesystems[1334]: Found loop1 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda1 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda2 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda3 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found usr Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda4 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda6 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda7 Nov 1 00:58:35.175609 extend-filesystems[1334]: Found sda9 Nov 1 00:58:35.175609 extend-filesystems[1334]: Checking size of /dev/sda9 Nov 1 01:00:13.289752 systemd-timesyncd[1280]: Contacted time server 144.202.62.209:123 (0.flatcar.pool.ntp.org). Nov 1 01:00:13.289784 systemd-timesyncd[1280]: Initial clock synchronization to Sat 2025-11-01 01:00:13.289683 UTC. Nov 1 01:00:13.289951 systemd-resolved[1278]: Clock change detected. Flushing caches. Nov 1 01:00:13.297212 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 01:00:13.297356 systemd[1]: Finished motdgen.service. Nov 1 01:00:13.321432 env[1357]: time="2025-11-01T01:00:13.321397638Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 01:00:13.333165 env[1357]: time="2025-11-01T01:00:13.333143846Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 01:00:13.333300 env[1357]: time="2025-11-01T01:00:13.333289092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334035 env[1357]: time="2025-11-01T01:00:13.334018566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334087 env[1357]: time="2025-11-01T01:00:13.334077394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334261 env[1357]: time="2025-11-01T01:00:13.334250022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334307 env[1357]: time="2025-11-01T01:00:13.334297636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334353 env[1357]: time="2025-11-01T01:00:13.334342583Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 01:00:13.334397 env[1357]: time="2025-11-01T01:00:13.334387794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334481 env[1357]: time="2025-11-01T01:00:13.334471974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334646 env[1357]: time="2025-11-01T01:00:13.334637340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334782 env[1357]: time="2025-11-01T01:00:13.334770261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:00:13.334826 env[1357]: time="2025-11-01T01:00:13.334816964Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 01:00:13.334891 env[1357]: time="2025-11-01T01:00:13.334881651Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 01:00:13.334940 env[1357]: time="2025-11-01T01:00:13.334931003Z" level=info msg="metadata content store policy set" policy=shared Nov 1 01:00:13.346205 extend-filesystems[1334]: Old size kept for /dev/sda9 Nov 1 01:00:13.346937 extend-filesystems[1334]: Found sr0 Nov 1 01:00:13.346467 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 01:00:13.346698 systemd[1]: Finished extend-filesystems.service. Nov 1 01:00:13.356155 tar[1351]: linux-amd64/LICENSE Nov 1 01:00:13.356155 tar[1351]: linux-amd64/helm Nov 1 01:00:13.357612 bash[1384]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:00:13.358103 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 01:00:13.363282 dbus-daemon[1332]: [system] SELinux support is enabled Nov 1 01:00:13.363530 systemd[1]: Started dbus.service. Nov 1 01:00:13.364758 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 01:00:13.364772 systemd[1]: Reached target system-config.target. Nov 1 01:00:13.364887 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 01:00:13.364896 systemd[1]: Reached target user-config.target. 
Nov 1 01:00:13.378267 systemd-logind[1341]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 01:00:13.378455 systemd-logind[1341]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 01:00:13.380338 systemd-logind[1341]: New seat seat0. Nov 1 01:00:13.386886 systemd[1]: Started systemd-logind.service. Nov 1 01:00:13.389275 env[1357]: time="2025-11-01T01:00:13.389256182Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 01:00:13.389398 env[1357]: time="2025-11-01T01:00:13.389387474Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 01:00:13.389451 env[1357]: time="2025-11-01T01:00:13.389436450Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 01:00:13.389516 env[1357]: time="2025-11-01T01:00:13.389507146Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.389752 env[1357]: time="2025-11-01T01:00:13.389743563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.389806 env[1357]: time="2025-11-01T01:00:13.389797777Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.389852 env[1357]: time="2025-11-01T01:00:13.389842438Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.389906 env[1357]: time="2025-11-01T01:00:13.389896461Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.390333 env[1357]: time="2025-11-01T01:00:13.390323710Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Nov 1 01:00:13.390400 env[1357]: time="2025-11-01T01:00:13.390378066Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.390451 env[1357]: time="2025-11-01T01:00:13.390441680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.393973 env[1357]: time="2025-11-01T01:00:13.393956322Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 01:00:13.394402 env[1357]: time="2025-11-01T01:00:13.394384666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 01:00:13.394554 env[1357]: time="2025-11-01T01:00:13.394544969Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 01:00:13.394828 env[1357]: time="2025-11-01T01:00:13.394812161Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 01:00:13.394920 env[1357]: time="2025-11-01T01:00:13.394910799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395189 env[1357]: time="2025-11-01T01:00:13.394958013Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 01:00:13.395284 env[1357]: time="2025-11-01T01:00:13.395274505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395342 env[1357]: time="2025-11-01T01:00:13.395332699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395482 env[1357]: time="2025-11-01T01:00:13.395473023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 1 01:00:13.395527 env[1357]: time="2025-11-01T01:00:13.395517443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395604 env[1357]: time="2025-11-01T01:00:13.395595564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395652 env[1357]: time="2025-11-01T01:00:13.395642723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395709 env[1357]: time="2025-11-01T01:00:13.395700253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395756 env[1357]: time="2025-11-01T01:00:13.395746409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.395817 env[1357]: time="2025-11-01T01:00:13.395807779Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 01:00:13.396198 env[1357]: time="2025-11-01T01:00:13.396181007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.396700 env[1357]: time="2025-11-01T01:00:13.396688963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.396755 env[1357]: time="2025-11-01T01:00:13.396740433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.396847 env[1357]: time="2025-11-01T01:00:13.396837082Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 01:00:13.397327 env[1357]: time="2025-11-01T01:00:13.397314259Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 01:00:13.397407 env[1357]: time="2025-11-01T01:00:13.397388809Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 01:00:13.397675 kernel: NET: Registered PF_VSOCK protocol family Nov 1 01:00:13.397733 env[1357]: time="2025-11-01T01:00:13.397721430Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 01:00:13.397809 env[1357]: time="2025-11-01T01:00:13.397798861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 01:00:13.398684 env[1357]: time="2025-11-01T01:00:13.398598195Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.399006549Z" level=info msg="Connect containerd service" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.399033594Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.399953985Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.400455289Z" level=info msg="Start subscribing containerd event" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.400479347Z" level=info msg="Start recovering state" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.400509189Z" level=info msg="Start event monitor" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.400529066Z" level=info msg="Start snapshots syncer" Nov 1 01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.400536307Z" level=info msg="Start cni network conf syncer for default" Nov 1 
01:00:13.400766 env[1357]: time="2025-11-01T01:00:13.400542220Z" level=info msg="Start streaming server" Nov 1 01:00:13.401574 env[1357]: time="2025-11-01T01:00:13.401546596Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 01:00:13.403411 env[1357]: time="2025-11-01T01:00:13.403400402Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 01:00:13.403564 systemd[1]: Started containerd.service. Nov 1 01:00:13.405315 update_engine[1342]: I1101 01:00:13.404640 1342 main.cc:92] Flatcar Update Engine starting Nov 1 01:00:13.410638 systemd[1]: Started update-engine.service. Nov 1 01:00:13.412748 update_engine[1342]: I1101 01:00:13.412559 1342 update_check_scheduler.cc:74] Next update check in 3m14s Nov 1 01:00:13.412220 systemd[1]: Started locksmithd.service. Nov 1 01:00:13.416533 env[1357]: time="2025-11-01T01:00:13.416511223Z" level=info msg="containerd successfully booted in 0.098349s" Nov 1 01:00:13.660829 systemd-networkd[1114]: ens192: Gained IPv6LL Nov 1 01:00:13.662122 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 01:00:13.662415 systemd[1]: Reached target network-online.target. Nov 1 01:00:13.663782 systemd[1]: Starting kubelet.service... Nov 1 01:00:13.847840 tar[1351]: linux-amd64/README.md Nov 1 01:00:13.854346 systemd[1]: Finished prepare-helm.service. Nov 1 01:00:13.969223 locksmithd[1407]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 01:00:14.915351 systemd[1]: Started kubelet.service. Nov 1 01:00:15.204424 sshd_keygen[1366]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 01:00:15.219316 systemd[1]: Finished sshd-keygen.service. Nov 1 01:00:15.220619 systemd[1]: Starting issuegen.service... Nov 1 01:00:15.224311 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 01:00:15.224457 systemd[1]: Finished issuegen.service. Nov 1 01:00:15.225767 systemd[1]: Starting systemd-user-sessions.service... 
Nov 1 01:00:15.230502 systemd[1]: Finished systemd-user-sessions.service. Nov 1 01:00:15.231544 systemd[1]: Started getty@tty1.service. Nov 1 01:00:15.232435 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 01:00:15.232634 systemd[1]: Reached target getty.target. Nov 1 01:00:15.232854 systemd[1]: Reached target multi-user.target. Nov 1 01:00:15.233935 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 01:00:15.239225 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 01:00:15.239373 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 01:00:15.239578 systemd[1]: Startup finished in 5.533s (kernel) + 5.964s (userspace) = 11.497s. Nov 1 01:00:15.261885 login[1491]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:00:15.263348 login[1492]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:00:15.269794 systemd[1]: Created slice user-500.slice. Nov 1 01:00:15.270437 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 01:00:15.275220 systemd-logind[1341]: New session 2 of user core. Nov 1 01:00:15.278309 systemd-logind[1341]: New session 1 of user core. Nov 1 01:00:15.281158 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 01:00:15.281987 systemd[1]: Starting user@500.service... Nov 1 01:00:15.285194 (systemd)[1498]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:15.355442 systemd[1498]: Queued start job for default target default.target. Nov 1 01:00:15.355824 systemd[1498]: Reached target paths.target. Nov 1 01:00:15.355896 systemd[1498]: Reached target sockets.target. Nov 1 01:00:15.355957 systemd[1498]: Reached target timers.target. Nov 1 01:00:15.356023 systemd[1498]: Reached target basic.target. Nov 1 01:00:15.356104 systemd[1498]: Reached target default.target. Nov 1 01:00:15.356155 systemd[1]: Started user@500.service. 
Nov 1 01:00:15.356225 systemd[1498]: Startup finished in 66ms. Nov 1 01:00:15.356853 systemd[1]: Started session-1.scope. Nov 1 01:00:15.357228 systemd[1]: Started session-2.scope. Nov 1 01:00:15.488514 kubelet[1470]: E1101 01:00:15.488452 1470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:15.489632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:15.489736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:25.740273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 01:00:25.740399 systemd[1]: Stopped kubelet.service. Nov 1 01:00:25.741649 systemd[1]: Starting kubelet.service... Nov 1 01:00:25.800986 systemd[1]: Started kubelet.service. Nov 1 01:00:25.847324 kubelet[1529]: E1101 01:00:25.847300 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:25.849313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:25.849401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:36.100049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 01:00:36.100216 systemd[1]: Stopped kubelet.service. Nov 1 01:00:36.101749 systemd[1]: Starting kubelet.service... Nov 1 01:00:36.404386 systemd[1]: Started kubelet.service. 
Nov 1 01:00:36.429695 kubelet[1543]: E1101 01:00:36.429659 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:36.430856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:36.430944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:43.533299 systemd[1]: Created slice system-sshd.slice. Nov 1 01:00:43.534115 systemd[1]: Started sshd@0-139.178.70.108:22-147.75.109.163:55760.service. Nov 1 01:00:43.582649 sshd[1550]: Accepted publickey for core from 147.75.109.163 port 55760 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:43.583445 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:43.586325 systemd[1]: Started session-3.scope. Nov 1 01:00:43.586976 systemd-logind[1341]: New session 3 of user core. Nov 1 01:00:43.635370 systemd[1]: Started sshd@1-139.178.70.108:22-147.75.109.163:55770.service. Nov 1 01:00:43.674419 sshd[1555]: Accepted publickey for core from 147.75.109.163 port 55770 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:43.675487 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:43.678435 systemd-logind[1341]: New session 4 of user core. Nov 1 01:00:43.678759 systemd[1]: Started session-4.scope. Nov 1 01:00:43.730474 systemd[1]: Started sshd@2-139.178.70.108:22-147.75.109.163:55778.service. Nov 1 01:00:43.731194 sshd[1555]: pam_unix(sshd:session): session closed for user core Nov 1 01:00:43.732884 systemd[1]: sshd@1-139.178.70.108:22-147.75.109.163:55770.service: Deactivated successfully. Nov 1 01:00:43.734779 systemd-logind[1341]: Session 4 logged out. 
Waiting for processes to exit. Nov 1 01:00:43.734834 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 01:00:43.736829 systemd-logind[1341]: Removed session 4. Nov 1 01:00:43.770603 sshd[1560]: Accepted publickey for core from 147.75.109.163 port 55778 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:43.771397 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:43.774226 systemd[1]: Started session-5.scope. Nov 1 01:00:43.774864 systemd-logind[1341]: New session 5 of user core. Nov 1 01:00:43.822194 sshd[1560]: pam_unix(sshd:session): session closed for user core Nov 1 01:00:43.823995 systemd[1]: Started sshd@3-139.178.70.108:22-147.75.109.163:55786.service. Nov 1 01:00:43.825544 systemd-logind[1341]: Session 5 logged out. Waiting for processes to exit. Nov 1 01:00:43.825599 systemd[1]: sshd@2-139.178.70.108:22-147.75.109.163:55778.service: Deactivated successfully. Nov 1 01:00:43.826039 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 01:00:43.826326 systemd-logind[1341]: Removed session 5. Nov 1 01:00:43.862473 sshd[1567]: Accepted publickey for core from 147.75.109.163 port 55786 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:43.863234 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:43.865843 systemd-logind[1341]: New session 6 of user core. Nov 1 01:00:43.866155 systemd[1]: Started session-6.scope. Nov 1 01:00:43.917308 systemd[1]: Started sshd@4-139.178.70.108:22-147.75.109.163:55798.service. Nov 1 01:00:43.917767 sshd[1567]: pam_unix(sshd:session): session closed for user core Nov 1 01:00:43.921623 systemd[1]: sshd@3-139.178.70.108:22-147.75.109.163:55786.service: Deactivated successfully. Nov 1 01:00:43.922154 systemd-logind[1341]: Session 6 logged out. Waiting for processes to exit. Nov 1 01:00:43.922188 systemd[1]: session-6.scope: Deactivated successfully. 
Nov 1 01:00:43.924990 systemd-logind[1341]: Removed session 6. Nov 1 01:00:43.956702 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 55798 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:43.957564 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:43.960267 systemd-logind[1341]: New session 7 of user core. Nov 1 01:00:43.961032 systemd[1]: Started session-7.scope. Nov 1 01:00:44.026907 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 01:00:44.027059 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:00:44.032793 dbus-daemon[1332]: \xd0-Ft/V: received setenforce notice (enforcing=1575807648) Nov 1 01:00:44.033819 sudo[1580]: pam_unix(sudo:session): session closed for user root Nov 1 01:00:44.037043 sshd[1574]: pam_unix(sshd:session): session closed for user core Nov 1 01:00:44.037279 systemd[1]: Started sshd@5-139.178.70.108:22-147.75.109.163:55808.service. Nov 1 01:00:44.039849 systemd-logind[1341]: Session 7 logged out. Waiting for processes to exit. Nov 1 01:00:44.039995 systemd[1]: sshd@4-139.178.70.108:22-147.75.109.163:55798.service: Deactivated successfully. Nov 1 01:00:44.040437 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 01:00:44.040933 systemd-logind[1341]: Removed session 7. Nov 1 01:00:44.077529 sshd[1582]: Accepted publickey for core from 147.75.109.163 port 55808 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:44.078488 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:44.081324 systemd[1]: Started session-8.scope. Nov 1 01:00:44.082065 systemd-logind[1341]: New session 8 of user core. 
Nov 1 01:00:44.131088 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 01:00:44.131217 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:00:44.133253 sudo[1589]: pam_unix(sudo:session): session closed for user root Nov 1 01:00:44.136284 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 01:00:44.136429 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:00:44.142441 systemd[1]: Stopping audit-rules.service... Nov 1 01:00:44.142000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 01:00:44.147520 kernel: kauditd_printk_skb: 230 callbacks suppressed Nov 1 01:00:44.147606 kernel: audit: type=1305 audit(1761958844.142:158): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 01:00:44.149263 auditctl[1592]: No rules Nov 1 01:00:44.142000 audit[1592]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffecc82ea20 a2=420 a3=0 items=0 ppid=1 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.149635 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 01:00:44.149791 systemd[1]: Stopped audit-rules.service. Nov 1 01:00:44.151160 systemd[1]: Starting audit-rules.service... 
Nov 1 01:00:44.153735 kernel: audit: type=1300 audit(1761958844.142:158): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffecc82ea20 a2=420 a3=0 items=0 ppid=1 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.153784 kernel: audit: type=1327 audit(1761958844.142:158): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 01:00:44.142000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 01:00:44.154314 kernel: audit: type=1131 audit(1761958844.148:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.164649 augenrules[1610]: No rules Nov 1 01:00:44.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.165185 systemd[1]: Finished audit-rules.service. Nov 1 01:00:44.168180 sudo[1588]: pam_unix(sudo:session): session closed for user root Nov 1 01:00:44.167000 audit[1588]: USER_END pid=1588 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.171659 kernel: audit: type=1130 audit(1761958844.163:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 01:00:44.171720 kernel: audit: type=1106 audit(1761958844.167:161): pid=1588 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.167000 audit[1588]: CRED_DISP pid=1588 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.174542 sshd[1582]: pam_unix(sshd:session): session closed for user core Nov 1 01:00:44.176832 kernel: audit: type=1104 audit(1761958844.167:162): pid=1588 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.176493 systemd[1]: Started sshd@6-139.178.70.108:22-147.75.109.163:55818.service. Nov 1 01:00:44.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.108:22-147.75.109.163:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.185542 kernel: audit: type=1130 audit(1761958844.174:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.108:22-147.75.109.163:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:00:44.185612 kernel: audit: type=1106 audit(1761958844.179:164): pid=1582 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.179000 audit[1582]: USER_END pid=1582 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.185895 systemd[1]: sshd@5-139.178.70.108:22-147.75.109.163:55808.service: Deactivated successfully. Nov 1 01:00:44.186985 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 01:00:44.187333 systemd-logind[1341]: Session 8 logged out. Waiting for processes to exit. Nov 1 01:00:44.187957 systemd-logind[1341]: Removed session 8. Nov 1 01:00:44.179000 audit[1582]: CRED_DISP pid=1582 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.192163 kernel: audit: type=1104 audit(1761958844.179:165): pid=1582 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.108:22-147.75.109.163:55808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:00:44.215000 audit[1615]: USER_ACCT pid=1615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.216943 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 55818 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:00:44.215000 audit[1615]: CRED_ACQ pid=1615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.216000 audit[1615]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef5dd88e0 a2=3 a3=0 items=0 ppid=1 pid=1615 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.216000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:00:44.218073 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:00:44.220958 systemd[1]: Started session-9.scope. Nov 1 01:00:44.221169 systemd-logind[1341]: New session 9 of user core. 
Nov 1 01:00:44.222000 audit[1615]: USER_START pid=1615 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.223000 audit[1620]: CRED_ACQ pid=1620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:00:44.268000 audit[1621]: USER_ACCT pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.269962 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 01:00:44.268000 audit[1621]: CRED_REFR pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.270128 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 01:00:44.269000 audit[1621]: USER_START pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.291360 systemd[1]: Starting docker.service... 
Nov 1 01:00:44.317028 env[1631]: time="2025-11-01T01:00:44.317002445Z" level=info msg="Starting up" Nov 1 01:00:44.318105 env[1631]: time="2025-11-01T01:00:44.318088821Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 01:00:44.318105 env[1631]: time="2025-11-01T01:00:44.318102034Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 01:00:44.318158 env[1631]: time="2025-11-01T01:00:44.318117111Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 01:00:44.318158 env[1631]: time="2025-11-01T01:00:44.318123391Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 01:00:44.318990 env[1631]: time="2025-11-01T01:00:44.318974770Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 01:00:44.319024 env[1631]: time="2025-11-01T01:00:44.318990237Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 01:00:44.319024 env[1631]: time="2025-11-01T01:00:44.318998328Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 01:00:44.319024 env[1631]: time="2025-11-01T01:00:44.319003377Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 01:00:44.338018 env[1631]: time="2025-11-01T01:00:44.337955027Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 01:00:44.338018 env[1631]: time="2025-11-01T01:00:44.337983839Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 01:00:44.338168 env[1631]: time="2025-11-01T01:00:44.338110389Z" level=info msg="Loading containers: start." 
Nov 1 01:00:44.399000 audit[1662]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.399000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffff43f9260 a2=0 a3=7ffff43f924c items=0 ppid=1631 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.399000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 01:00:44.401000 audit[1664]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.401000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd77ab21c0 a2=0 a3=7ffd77ab21ac items=0 ppid=1631 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.401000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 01:00:44.402000 audit[1666]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.402000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff811c5930 a2=0 a3=7fff811c591c items=0 ppid=1631 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.402000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 
Nov 1 01:00:44.403000 audit[1668]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.403000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdfcf31960 a2=0 a3=7ffdfcf3194c items=0 ppid=1631 pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.403000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 01:00:44.405000 audit[1670]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.405000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffde8099140 a2=0 a3=7ffde809912c items=0 ppid=1631 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.405000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 01:00:44.419000 audit[1675]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.419000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd500d4660 a2=0 a3=7ffd500d464c items=0 ppid=1631 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.419000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 01:00:44.434000 audit[1677]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.434000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd14bf1680 a2=0 a3=7ffd14bf166c items=0 ppid=1631 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.434000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 01:00:44.436000 audit[1679]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.436000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc22a51610 a2=0 a3=7ffc22a515fc items=0 ppid=1631 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.436000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 01:00:44.437000 audit[1681]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.437000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc5a643490 a2=0 a3=7ffc5a64347c items=0 ppid=1631 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
01:00:44.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:00:44.455000 audit[1685]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.455000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff5ffe4430 a2=0 a3=7fff5ffe441c items=0 ppid=1631 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.455000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:00:44.459000 audit[1686]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.459000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcf990fcd0 a2=0 a3=7ffcf990fcbc items=0 ppid=1631 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.459000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:00:44.468692 kernel: Initializing XFRM netlink socket Nov 1 01:00:44.496734 env[1631]: time="2025-11-01T01:00:44.496707299Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Nov 1 01:00:44.537000 audit[1694]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.537000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff5f60d9f0 a2=0 a3=7fff5f60d9dc items=0 ppid=1631 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.537000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 01:00:44.556000 audit[1697]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.556000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd9f354db0 a2=0 a3=7ffd9f354d9c items=0 ppid=1631 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.556000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 01:00:44.558000 audit[1700]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.558000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd8f7c9680 a2=0 a3=7ffd8f7c966c items=0 ppid=1631 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.558000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 01:00:44.560000 audit[1702]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.560000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc9b7ab470 a2=0 a3=7ffc9b7ab45c items=0 ppid=1631 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.560000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Nov 1 01:00:44.561000 audit[1704]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.561000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffdff907760 a2=0 a3=7ffdff90774c items=0 ppid=1631 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.561000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 01:00:44.562000 audit[1706]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.562000 audit[1706]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdc7aa0f60 a2=0 a3=7ffdc7aa0f4c items=0 ppid=1631 pid=1706 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.562000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 01:00:44.564000 audit[1708]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.564000 audit[1708]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd0b2cedf0 a2=0 a3=7ffd0b2ceddc items=0 ppid=1631 pid=1708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.564000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 01:00:44.583000 audit[1711]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.583000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe5d9e5170 a2=0 a3=7ffe5d9e515c items=0 ppid=1631 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.583000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 01:00:44.585000 audit[1713]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule 
pid=1713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.585000 audit[1713]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffeb94f7f90 a2=0 a3=7ffeb94f7f7c items=0 ppid=1631 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.585000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 01:00:44.586000 audit[1715]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1715 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.586000 audit[1715]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcf5aa14e0 a2=0 a3=7ffcf5aa14cc items=0 ppid=1631 pid=1715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.586000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 01:00:44.589000 audit[1717]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.589000 audit[1717]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6e775cb0 a2=0 a3=7ffc6e775c9c items=0 ppid=1631 pid=1717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.589000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 01:00:44.591016 systemd-networkd[1114]: docker0: Link UP Nov 1 01:00:44.601000 audit[1721]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.601000 audit[1721]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd444fb520 a2=0 a3=7ffd444fb50c items=0 ppid=1631 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:00:44.606000 audit[1722]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:00:44.606000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc3b442000 a2=0 a3=7ffc3b441fec items=0 ppid=1631 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:00:44.606000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 01:00:44.608782 env[1631]: time="2025-11-01T01:00:44.608758486Z" level=info msg="Loading containers: done." 
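The `PROCTITLE` audit records above carry the invoked command line as a hex-encoded byte string, with NUL bytes separating the argv elements. A minimal sketch (not part of the log) that decodes one of the proctitle values recorded above:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Split a hex-encoded audit proctitle into its argv components.

    Audit encodes the full command line as hex because argv elements are
    separated by NUL bytes, which cannot appear literally in the record.
    """
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace")
            for arg in raw.split(b"\x00") if arg]

# Proctitle from the audit[1721] record above (DOCKER-USER chain removal).
title = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
         "002D4400464F5257415244002D6A00444F434B45522D55534552")
print(decode_proctitle(title))
# → ['/usr/sbin/iptables', '--wait', '-D', 'FORWARD', '-j', 'DOCKER-USER']
```

Decoding the other `PROCTITLE` records the same way shows Docker programming its standard `DOCKER`, `DOCKER-ISOLATION-STAGE-1/2`, and `DOCKER-USER` chains via `iptables --wait`.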
Nov 1 01:00:44.618784 env[1631]: time="2025-11-01T01:00:44.618752918Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 01:00:44.618898 env[1631]: time="2025-11-01T01:00:44.618883230Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 01:00:44.618960 env[1631]: time="2025-11-01T01:00:44.618947466Z" level=info msg="Daemon has completed initialization" Nov 1 01:00:44.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:44.625786 systemd[1]: Started docker.service. Nov 1 01:00:44.629868 env[1631]: time="2025-11-01T01:00:44.629828094Z" level=info msg="API listen on /run/docker.sock" Nov 1 01:00:46.261591 env[1357]: time="2025-11-01T01:00:46.261565032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 01:00:46.681460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 01:00:46.681589 systemd[1]: Stopped kubelet.service. Nov 1 01:00:46.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:46.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:46.682709 systemd[1]: Starting kubelet.service... Nov 1 01:00:46.871794 systemd[1]: Started kubelet.service. 
Nov 1 01:00:46.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:46.952466 kubelet[1761]: E1101 01:00:46.952252 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:46.953501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:46.953596 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:46.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:00:47.373609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126906506.mount: Deactivated successfully. 
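The kubelet records above show a crash loop: the unit starts, exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet, and systemd reschedules it with an incrementing restart counter (3 here, 4 later in this log). A hypothetical sketch, assuming journal text captured as a plain string, that extracts the counter to track the loop:

```python
import re

# Sample records copied from the journal above (restart counters 3 and 4).
LOG = """\
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
systemd[1]: kubelet.service: Failed with result 'exit-code'.
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
"""

# Pull the restart counter out of each systemd restart record.
counters = [int(m) for m in re.findall(r"restart counter is at (\d+)", LOG)]
print(counters)  # → [3, 4]
```

A steadily climbing counter with the same `config.yaml: no such file or directory` error is expected here: the file is normally written by `kubeadm` (or equivalent provisioning), so the loop resolves on its own once that step completes, as the later successful kubelet start in this log shows.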
Nov 1 01:00:48.823516 env[1357]: time="2025-11-01T01:00:48.823484502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:48.824862 env[1357]: time="2025-11-01T01:00:48.824848582Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:48.825931 env[1357]: time="2025-11-01T01:00:48.825917141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:48.827176 env[1357]: time="2025-11-01T01:00:48.827155784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:48.827745 env[1357]: time="2025-11-01T01:00:48.827727805Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 01:00:48.828272 env[1357]: time="2025-11-01T01:00:48.828257437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 01:00:50.304962 env[1357]: time="2025-11-01T01:00:50.304910965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:50.321891 env[1357]: time="2025-11-01T01:00:50.321858901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 01:00:50.330138 env[1357]: time="2025-11-01T01:00:50.330114994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:50.333269 env[1357]: time="2025-11-01T01:00:50.333240452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:50.334147 env[1357]: time="2025-11-01T01:00:50.334120056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 01:00:50.334603 env[1357]: time="2025-11-01T01:00:50.334582317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 01:00:51.565828 env[1357]: time="2025-11-01T01:00:51.565782240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:51.576367 env[1357]: time="2025-11-01T01:00:51.576334962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:51.584321 env[1357]: time="2025-11-01T01:00:51.584294258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:51.588734 env[1357]: time="2025-11-01T01:00:51.588704846Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:51.589517 env[1357]: time="2025-11-01T01:00:51.589490214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 01:00:51.590582 env[1357]: time="2025-11-01T01:00:51.590559990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 01:00:52.848153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542816883.mount: Deactivated successfully. Nov 1 01:00:53.441451 env[1357]: time="2025-11-01T01:00:53.441405903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:53.460099 env[1357]: time="2025-11-01T01:00:53.460075009Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:53.477940 env[1357]: time="2025-11-01T01:00:53.477914160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:53.482098 env[1357]: time="2025-11-01T01:00:53.482077184Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:53.482371 env[1357]: time="2025-11-01T01:00:53.482351100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference 
\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 01:00:53.482967 env[1357]: time="2025-11-01T01:00:53.482949522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 01:00:54.156696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612518375.mount: Deactivated successfully. Nov 1 01:00:55.153082 env[1357]: time="2025-11-01T01:00:55.153044252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.172422 env[1357]: time="2025-11-01T01:00:55.172395662Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.177389 env[1357]: time="2025-11-01T01:00:55.177372259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.189363 env[1357]: time="2025-11-01T01:00:55.189344119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.189876 env[1357]: time="2025-11-01T01:00:55.189858309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 01:00:55.190188 env[1357]: time="2025-11-01T01:00:55.190160744Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 01:00:55.875400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190828951.mount: Deactivated successfully. 
Nov 1 01:00:55.877710 env[1357]: time="2025-11-01T01:00:55.877660920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.878700 env[1357]: time="2025-11-01T01:00:55.878671357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.879732 env[1357]: time="2025-11-01T01:00:55.879714909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.880711 env[1357]: time="2025-11-01T01:00:55.880692718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:55.881057 env[1357]: time="2025-11-01T01:00:55.881036991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 01:00:55.881705 env[1357]: time="2025-11-01T01:00:55.881684489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 01:00:56.419657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254995229.mount: Deactivated successfully. Nov 1 01:00:57.131022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 01:00:57.131174 systemd[1]: Stopped kubelet.service. Nov 1 01:00:57.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:57.132572 systemd[1]: Starting kubelet.service... 
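The kernel `audit:` records interleaved below timestamp events as `audit(EPOCH.MILLIS:SERIAL)`, e.g. `audit(1761958857.129:204)`. A minimal sketch to convert such a stamp back to the wall-clock time used elsewhere in this log (the log's timestamps are UTC, matching the `-00` zone in the kernel version line):

```python
from datetime import datetime, timezone

def audit_time(stamp: str) -> str:
    """Convert an 'audit(EPOCH.MS:SERIAL)' stamp to a UTC wall-clock string."""
    inner = stamp[stamp.index("(") + 1 : stamp.index(")")]
    epoch, serial = inner.split(":")
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    # Trim microseconds to milliseconds to match the audit record's precision.
    return f"{ts.strftime('%b %d %H:%M:%S.%f')[:-3]} (event #{serial})"

print(audit_time("audit(1761958857.129:204)"))
# → Nov 01 01:00:57.129 (event #204)
```

This lets the raw `kauditd` lines be correlated with the surrounding systemd records, which carry the same instant as `Nov 1 01:00:57` prefixes.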
Nov 1 01:00:57.136941 kernel: kauditd_printk_skb: 88 callbacks suppressed Nov 1 01:00:57.136994 kernel: audit: type=1130 audit(1761958857.129:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:57.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:57.143617 kernel: audit: type=1131 audit(1761958857.129:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:58.757057 update_engine[1342]: I1101 01:00:58.756734 1342 update_attempter.cc:509] Updating boot flags... Nov 1 01:00:59.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:00:59.020054 systemd[1]: Started kubelet.service. Nov 1 01:00:59.023692 kernel: audit: type=1130 audit(1761958859.018:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:00:59.101463 kubelet[1779]: E1101 01:00:59.101433 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:00:59.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:00:59.102594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:00:59.102698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:00:59.105683 kernel: audit: type=1131 audit(1761958859.101:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 01:00:59.502631 env[1357]: time="2025-11-01T01:00:59.502586590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:59.523752 env[1357]: time="2025-11-01T01:00:59.523721641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:59.535976 env[1357]: time="2025-11-01T01:00:59.535947365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:59.545438 env[1357]: time="2025-11-01T01:00:59.545406204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:00:59.546145 env[1357]: time="2025-11-01T01:00:59.546123987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 01:01:01.688071 systemd[1]: Stopped kubelet.service. Nov 1 01:01:01.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:01.690133 systemd[1]: Starting kubelet.service... Nov 1 01:01:01.693593 kernel: audit: type=1130 audit(1761958861.686:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:01.693637 kernel: audit: type=1131 audit(1761958861.686:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:01.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:01.710617 systemd[1]: Reloading. Nov 1 01:01:01.764238 /usr/lib/systemd/system-generators/torcx-generator[1845]: time="2025-11-01T01:01:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:01:01.764510 /usr/lib/systemd/system-generators/torcx-generator[1845]: time="2025-11-01T01:01:01Z" level=info msg="torcx already run" Nov 1 01:01:01.828345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:01:01.828460 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:01:01.841036 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:01:01.901254 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 01:01:01.901328 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 01:01:01.901543 systemd[1]: Stopped kubelet.service. 
Nov 1 01:01:01.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:01:01.903410 systemd[1]: Starting kubelet.service... Nov 1 01:01:01.904681 kernel: audit: type=1130 audit(1761958861.899:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 01:01:03.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:03.223020 systemd[1]: Started kubelet.service. Nov 1 01:01:03.226675 kernel: audit: type=1130 audit(1761958863.221:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:03.504772 kubelet[1920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:01:03.504772 kubelet[1920]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:01:03.504772 kubelet[1920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 01:01:03.504772 kubelet[1920]: I1101 01:01:03.504649 1920 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:01:03.704710 kubelet[1920]: I1101 01:01:03.704673 1920 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:01:03.704710 kubelet[1920]: I1101 01:01:03.704702 1920 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:01:03.705112 kubelet[1920]: I1101 01:01:03.705097 1920 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:01:03.728699 kubelet[1920]: E1101 01:01:03.728649 1920 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:03.729874 kubelet[1920]: I1101 01:01:03.729852 1920 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:01:03.737719 kubelet[1920]: E1101 01:01:03.737693 1920 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:01:03.737719 kubelet[1920]: I1101 01:01:03.737716 1920 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 01:01:03.739887 kubelet[1920]: I1101 01:01:03.739871 1920 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 01:01:03.741771 kubelet[1920]: I1101 01:01:03.741743 1920 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:01:03.741949 kubelet[1920]: I1101 01:01:03.741836 1920 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 01:01:03.742101 kubelet[1920]: I1101 01:01:03.742092 1920 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 01:01:03.742151 kubelet[1920]: I1101 01:01:03.742144 1920 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:01:03.742273 kubelet[1920]: I1101 01:01:03.742266 1920 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:01:03.779714 kubelet[1920]: I1101 01:01:03.779151 1920 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:01:03.779845 kubelet[1920]: I1101 01:01:03.779834 1920 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:01:03.779919 kubelet[1920]: I1101 01:01:03.779910 1920 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:01:03.779984 kubelet[1920]: I1101 01:01:03.779975 1920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:01:03.823840 kubelet[1920]: W1101 01:01:03.823783 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:03.823959 kubelet[1920]: E1101 01:01:03.823844 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:03.824245 kubelet[1920]: W1101 01:01:03.824112 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:03.824245 kubelet[1920]: E1101 01:01:03.824164 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to 
list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:03.824331 kubelet[1920]: I1101 01:01:03.824305 1920 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:01:03.825017 kubelet[1920]: I1101 01:01:03.824585 1920 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:01:03.827114 kubelet[1920]: W1101 01:01:03.827097 1920 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 01:01:03.833512 kubelet[1920]: I1101 01:01:03.833492 1920 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:01:03.833600 kubelet[1920]: I1101 01:01:03.833522 1920 server.go:1287] "Started kubelet" Nov 1 01:01:03.841739 kubelet[1920]: I1101 01:01:03.841700 1920 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:01:03.842357 kubelet[1920]: I1101 01:01:03.842342 1920 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:01:03.841000 audit[1920]: AVC avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:03.843124 kubelet[1920]: I1101 01:01:03.843111 1920 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 01:01:03.843191 kubelet[1920]: I1101 01:01:03.843181 1920 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" 
err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 01:01:03.843273 kubelet[1920]: I1101 01:01:03.843264 1920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:01:03.841000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:03.846830 kernel: audit: type=1400 audit(1761958863.841:212): avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:03.846876 kernel: audit: type=1401 audit(1761958863.841:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:03.846904 kernel: audit: type=1300 audit(1761958863.841:212): arch=c000003e syscall=188 success=no exit=-22 a0=c000ae1590 a1=c000bf0b88 a2=c000ae1560 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.841000 audit[1920]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ae1590 a1=c000bf0b88 a2=c000ae1560 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.848758 kubelet[1920]: I1101 01:01:03.848745 1920 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:01:03.850655 kubelet[1920]: I1101 01:01:03.850648 1920 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:01:03.850741 kubelet[1920]: I1101 01:01:03.850703 1920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:01:03.850853 kubelet[1920]: I1101 
01:01:03.850843 1920 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:01:03.850959 kubelet[1920]: E1101 01:01:03.850938 1920 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 01:01:03.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:03.853092 kubelet[1920]: I1101 01:01:03.853081 1920 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:01:03.853168 kubelet[1920]: I1101 01:01:03.853161 1920 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:01:03.855586 kubelet[1920]: E1101 01:01:03.851527 1920 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bc405ba598ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 01:01:03.833503981 +0000 UTC m=+0.604528705,LastTimestamp:2025-11-01 01:01:03.833503981 +0000 UTC m=+0.604528705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 01:01:03.855730 kernel: audit: type=1327 audit(1761958863.841:212): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:03.855816 kubelet[1920]: E1101 01:01:03.855791 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Nov 1 01:01:03.841000 audit[1920]: AVC avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:03.858708 kernel: audit: type=1400 audit(1761958863.841:213): avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:03.841000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:03.860689 kernel: audit: type=1401 audit(1761958863.841:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:03.860721 kubelet[1920]: I1101 01:01:03.859342 1920 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:01:03.860721 kubelet[1920]: I1101 01:01:03.859395 1920 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:01:03.860721 kubelet[1920]: I1101 01:01:03.860389 1920 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:01:03.841000 audit[1920]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000942ae0 a1=c000bf0ba0 a2=c000ae1620 a3=25 items=0 
ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:03.868495 kernel: audit: type=1300 audit(1761958863.841:213): arch=c000003e syscall=188 success=no exit=-22 a0=c000942ae0 a1=c000bf0ba0 a2=c000ae1620 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.868537 kernel: audit: type=1327 audit(1761958863.841:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:03.846000 audit[1932]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.846000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffda21b3de0 a2=0 a3=7ffda21b3dcc items=0 ppid=1920 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 01:01:03.871674 kernel: audit: type=1325 audit(1761958863.846:214): table=mangle:26 family=2 
entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.846000 audit[1933]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.846000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8ad01640 a2=0 a3=7fff8ad0162c items=0 ppid=1920 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 01:01:03.849000 audit[1935]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.849000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcaf673bf0 a2=0 a3=7ffcaf673bdc items=0 ppid=1920 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:01:03.849000 audit[1937]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.849000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe52a0bb40 a2=0 a3=7ffe52a0bb2c items=0 ppid=1920 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.849000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:01:03.874040 kubelet[1920]: W1101 01:01:03.874014 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:03.874099 kubelet[1920]: E1101 01:01:03.874064 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:03.874614 kubelet[1920]: E1101 01:01:03.874606 1920 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:01:03.876960 kubelet[1920]: I1101 01:01:03.876950 1920 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:01:03.877027 kubelet[1920]: I1101 01:01:03.877019 1920 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:01:03.877073 kubelet[1920]: I1101 01:01:03.877067 1920 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:01:03.876000 audit[1943]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.876000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcd1750610 a2=0 a3=7ffcd17505fc items=0 ppid=1920 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.876000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 01:01:03.878290 kubelet[1920]: I1101 01:01:03.878279 1920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:01:03.877000 audit[1944]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:03.877000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffeed435c0 a2=0 a3=7fffeed435ac items=0 ppid=1920 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 01:01:03.877000 audit[1945]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.877000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdd4153a0 a2=0 a3=7ffcdd41538c items=0 ppid=1920 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 01:01:03.878000 audit[1947]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.878000 audit[1947]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffee63fdd10 a2=0 a3=7ffee63fdcfc items=0 ppid=1920 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 01:01:03.878000 audit[1946]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:03.878000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff58081f90 a2=0 a3=7fff58081f7c items=0 ppid=1920 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 01:01:03.879000 audit[1949]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:03.879000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffea0b4e20 a2=0 a3=7fffea0b4e0c items=0 ppid=1920 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 01:01:03.879000 audit[1950]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:03.879000 
audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffb7664a70 a2=0 a3=7fffb7664a5c items=0 ppid=1920 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 01:01:03.881735 kubelet[1920]: I1101 01:01:03.879016 1920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:01:03.881735 kubelet[1920]: I1101 01:01:03.879032 1920 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:01:03.881735 kubelet[1920]: I1101 01:01:03.879045 1920 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:01:03.881735 kubelet[1920]: I1101 01:01:03.879055 1920 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:01:03.881735 kubelet[1920]: E1101 01:01:03.879078 1920 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:01:03.881735 kubelet[1920]: W1101 01:01:03.879813 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:03.881735 kubelet[1920]: E1101 01:01:03.879830 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" 
logger="UnhandledError" Nov 1 01:01:03.881906 kubelet[1920]: I1101 01:01:03.881898 1920 policy_none.go:49] "None policy: Start" Nov 1 01:01:03.881960 kubelet[1920]: I1101 01:01:03.881952 1920 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:01:03.882012 kubelet[1920]: I1101 01:01:03.882005 1920 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:01:03.880000 audit[1951]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:03.880000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcbb185ee0 a2=0 a3=7ffcbb185ecc items=0 ppid=1920 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 01:01:03.888351 kubelet[1920]: I1101 01:01:03.888341 1920 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:01:03.888000 audit[1920]: AVC avc: denied { mac_admin } for pid=1920 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:03.888000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:03.888000 audit[1920]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cd12c0 a1=c001010720 a2=c000cd1290 a3=25 items=0 ppid=1 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:03.888000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:03.891710 kubelet[1920]: I1101 01:01:03.891699 1920 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 01:01:03.891821 kubelet[1920]: I1101 01:01:03.891814 1920 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:01:03.891886 kubelet[1920]: I1101 01:01:03.891861 1920 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:01:03.892082 kubelet[1920]: I1101 01:01:03.892075 1920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:01:03.893088 kubelet[1920]: E1101 01:01:03.893073 1920 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 01:01:03.893133 kubelet[1920]: E1101 01:01:03.893098 1920 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 01:01:03.983361 kubelet[1920]: E1101 01:01:03.983339 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:03.984857 kubelet[1920]: E1101 01:01:03.984845 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:03.986543 kubelet[1920]: E1101 01:01:03.986526 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:03.992590 kubelet[1920]: I1101 01:01:03.992570 1920 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 01:01:03.992833 kubelet[1920]: E1101 01:01:03.992816 1920 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 1 01:01:04.053550 kubelet[1920]: I1101 01:01:04.053483 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b7e82b23872942deb692a4f9d774854-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b7e82b23872942deb692a4f9d774854\") " pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:04.053773 kubelet[1920]: I1101 01:01:04.053758 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:04.053926 kubelet[1920]: I1101 01:01:04.053881 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:04.054101 kubelet[1920]: I1101 01:01:04.054089 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b7e82b23872942deb692a4f9d774854-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b7e82b23872942deb692a4f9d774854\") " pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:04.054288 kubelet[1920]: I1101 01:01:04.054244 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:04.054392 kubelet[1920]: I1101 01:01:04.054381 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:04.054537 kubelet[1920]: I1101 01:01:04.054526 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:04.054691 kubelet[1920]: I1101 01:01:04.054679 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 01:01:04.054881 kubelet[1920]: I1101 01:01:04.054871 1920 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b7e82b23872942deb692a4f9d774854-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b7e82b23872942deb692a4f9d774854\") " pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:04.056117 kubelet[1920]: E1101 01:01:04.056093 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Nov 1 01:01:04.194651 kubelet[1920]: I1101 01:01:04.194569 1920 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 01:01:04.194801 kubelet[1920]: E1101 01:01:04.194788 1920 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 1 01:01:04.284380 env[1357]: time="2025-11-01T01:01:04.284341948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b7e82b23872942deb692a4f9d774854,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:04.286267 env[1357]: 
time="2025-11-01T01:01:04.286014426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:04.287261 env[1357]: time="2025-11-01T01:01:04.287230074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:04.457114 kubelet[1920]: E1101 01:01:04.457080 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Nov 1 01:01:04.596688 kubelet[1920]: I1101 01:01:04.596492 1920 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 01:01:04.596688 kubelet[1920]: E1101 01:01:04.596655 1920 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 1 01:01:04.797060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023630055.mount: Deactivated successfully. 
Nov 1 01:01:04.799887 env[1357]: time="2025-11-01T01:01:04.799827865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.801058 env[1357]: time="2025-11-01T01:01:04.801032220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.802018 env[1357]: time="2025-11-01T01:01:04.801964336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.803562 env[1357]: time="2025-11-01T01:01:04.803543554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.804052 env[1357]: time="2025-11-01T01:01:04.804034023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.804521 env[1357]: time="2025-11-01T01:01:04.804502743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.804945 env[1357]: time="2025-11-01T01:01:04.804927965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.806108 env[1357]: time="2025-11-01T01:01:04.806090564Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.808056 env[1357]: time="2025-11-01T01:01:04.808038069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.808680 env[1357]: time="2025-11-01T01:01:04.808647622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.810343 env[1357]: time="2025-11-01T01:01:04.810322371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.813320 env[1357]: time="2025-11-01T01:01:04.813295611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:04.834489 env[1357]: time="2025-11-01T01:01:04.825547731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:04.834489 env[1357]: time="2025-11-01T01:01:04.825582049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:04.834489 env[1357]: time="2025-11-01T01:01:04.825607761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:04.834489 env[1357]: time="2025-11-01T01:01:04.825739808Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6c13d887609a529b780d664cffe89a74929aff4bbf1ffd0abcd764a120ae204 pid=1968 runtime=io.containerd.runc.v2 Nov 1 01:01:04.834747 env[1357]: time="2025-11-01T01:01:04.826744231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:04.834747 env[1357]: time="2025-11-01T01:01:04.826781320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:04.834747 env[1357]: time="2025-11-01T01:01:04.826791100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:04.834747 env[1357]: time="2025-11-01T01:01:04.826920159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6d50cdaf67143d641472f722ac4cd7d3d013835bf0afc9ffa0ce16cdf01986f pid=1967 runtime=io.containerd.runc.v2 Nov 1 01:01:04.857384 kubelet[1920]: W1101 01:01:04.857316 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:04.857384 kubelet[1920]: E1101 01:01:04.857364 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:04.875270 
env[1357]: time="2025-11-01T01:01:04.875235247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:04.875409 env[1357]: time="2025-11-01T01:01:04.875394339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:04.875492 env[1357]: time="2025-11-01T01:01:04.875476811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:04.875644 env[1357]: time="2025-11-01T01:01:04.875629001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9ce50e372e8571e78127e549fc03bad12d0b72468276488b75178a4bc12324a pid=2031 runtime=io.containerd.runc.v2 Nov 1 01:01:04.898599 env[1357]: time="2025-11-01T01:01:04.898568578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b7e82b23872942deb692a4f9d774854,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6c13d887609a529b780d664cffe89a74929aff4bbf1ffd0abcd764a120ae204\"" Nov 1 01:01:04.900741 kubelet[1920]: W1101 01:01:04.900640 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:04.900842 kubelet[1920]: E1101 01:01:04.900720 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:04.902614 env[1357]: time="2025-11-01T01:01:04.902593241Z" level=info 
msg="CreateContainer within sandbox \"b6c13d887609a529b780d664cffe89a74929aff4bbf1ffd0abcd764a120ae204\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:01:04.907136 env[1357]: time="2025-11-01T01:01:04.907110337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6d50cdaf67143d641472f722ac4cd7d3d013835bf0afc9ffa0ce16cdf01986f\"" Nov 1 01:01:04.908448 env[1357]: time="2025-11-01T01:01:04.908430773Z" level=info msg="CreateContainer within sandbox \"b6d50cdaf67143d641472f722ac4cd7d3d013835bf0afc9ffa0ce16cdf01986f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:01:04.917075 kubelet[1920]: W1101 01:01:04.917026 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:04.917075 kubelet[1920]: E1101 01:01:04.917079 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:04.920290 env[1357]: time="2025-11-01T01:01:04.920257985Z" level=info msg="CreateContainer within sandbox \"b6d50cdaf67143d641472f722ac4cd7d3d013835bf0afc9ffa0ce16cdf01986f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"60b0176c88bb69e64e91c6429afe9140169706bbb755f8c420d220d1a41e6daf\"" Nov 1 01:01:04.920977 env[1357]: time="2025-11-01T01:01:04.920960965Z" level=info msg="StartContainer for \"60b0176c88bb69e64e91c6429afe9140169706bbb755f8c420d220d1a41e6daf\"" Nov 1 
01:01:04.923448 env[1357]: time="2025-11-01T01:01:04.923420303Z" level=info msg="CreateContainer within sandbox \"b6c13d887609a529b780d664cffe89a74929aff4bbf1ffd0abcd764a120ae204\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5711bef2e85e353e0b48354676eba7194c82e82afce3230243268eaf8c6838ad\"" Nov 1 01:01:04.923787 env[1357]: time="2025-11-01T01:01:04.923774992Z" level=info msg="StartContainer for \"5711bef2e85e353e0b48354676eba7194c82e82afce3230243268eaf8c6838ad\"" Nov 1 01:01:04.925951 env[1357]: time="2025-11-01T01:01:04.925930028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9ce50e372e8571e78127e549fc03bad12d0b72468276488b75178a4bc12324a\"" Nov 1 01:01:04.927355 env[1357]: time="2025-11-01T01:01:04.927333603Z" level=info msg="CreateContainer within sandbox \"f9ce50e372e8571e78127e549fc03bad12d0b72468276488b75178a4bc12324a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:01:04.975530 env[1357]: time="2025-11-01T01:01:04.974707583Z" level=info msg="CreateContainer within sandbox \"f9ce50e372e8571e78127e549fc03bad12d0b72468276488b75178a4bc12324a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48df85c12165b96d0c3720569bc347f231c688a80be415e9018400a1f9b802d8\"" Nov 1 01:01:04.975745 env[1357]: time="2025-11-01T01:01:04.975730087Z" level=info msg="StartContainer for \"48df85c12165b96d0c3720569bc347f231c688a80be415e9018400a1f9b802d8\"" Nov 1 01:01:04.980691 env[1357]: time="2025-11-01T01:01:04.979824429Z" level=info msg="StartContainer for \"5711bef2e85e353e0b48354676eba7194c82e82afce3230243268eaf8c6838ad\" returns successfully" Nov 1 01:01:05.008154 env[1357]: time="2025-11-01T01:01:05.008121792Z" level=info msg="StartContainer for \"60b0176c88bb69e64e91c6429afe9140169706bbb755f8c420d220d1a41e6daf\" returns successfully" Nov 1 01:01:05.029811 
env[1357]: time="2025-11-01T01:01:05.029780932Z" level=info msg="StartContainer for \"48df85c12165b96d0c3720569bc347f231c688a80be415e9018400a1f9b802d8\" returns successfully" Nov 1 01:01:05.143183 kubelet[1920]: W1101 01:01:05.143105 1920 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Nov 1 01:01:05.143183 kubelet[1920]: E1101 01:01:05.143163 1920 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:05.257373 kubelet[1920]: E1101 01:01:05.257340 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Nov 1 01:01:05.398326 kubelet[1920]: I1101 01:01:05.398257 1920 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 01:01:05.398549 kubelet[1920]: E1101 01:01:05.398525 1920 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Nov 1 01:01:05.843950 kubelet[1920]: E1101 01:01:05.843886 1920 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:01:05.882799 kubelet[1920]: E1101 01:01:05.882777 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:05.883846 kubelet[1920]: E1101 01:01:05.883832 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:05.884770 kubelet[1920]: E1101 01:01:05.884757 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:06.886283 kubelet[1920]: E1101 01:01:06.886263 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:06.886545 kubelet[1920]: E1101 01:01:06.886495 1920 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 01:01:06.999447 kubelet[1920]: I1101 01:01:06.999428 1920 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 01:01:07.177686 kubelet[1920]: E1101 01:01:07.177608 1920 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 01:01:07.336452 kubelet[1920]: I1101 01:01:07.336424 1920 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 01:01:07.336452 kubelet[1920]: E1101 01:01:07.336448 1920 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 01:01:07.351996 kubelet[1920]: I1101 01:01:07.351980 1920 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" 
Nov 1 01:01:07.355445 kubelet[1920]: E1101 01:01:07.355429 1920 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:07.355541 kubelet[1920]: I1101 01:01:07.355534 1920 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 01:01:07.356478 kubelet[1920]: E1101 01:01:07.356466 1920 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 01:01:07.356558 kubelet[1920]: I1101 01:01:07.356551 1920 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:07.357404 kubelet[1920]: E1101 01:01:07.357394 1920 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:07.827894 kubelet[1920]: I1101 01:01:07.827866 1920 apiserver.go:52] "Watching apiserver" Nov 1 01:01:07.853575 kubelet[1920]: I1101 01:01:07.853552 1920 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:01:07.886621 kubelet[1920]: I1101 01:01:07.886596 1920 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:07.887960 kubelet[1920]: E1101 01:01:07.887943 1920 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:08.960504 systemd[1]: Reloading. 
Nov 1 01:01:09.019803 /usr/lib/systemd/system-generators/torcx-generator[2214]: time="2025-11-01T01:01:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 01:01:09.019820 /usr/lib/systemd/system-generators/torcx-generator[2214]: time="2025-11-01T01:01:09Z" level=info msg="torcx already run" Nov 1 01:01:09.062917 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 01:01:09.063052 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 01:01:09.075376 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:01:09.131600 kubelet[1920]: I1101 01:01:09.131576 1920 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:01:09.133252 systemd[1]: Stopping kubelet.service... Nov 1 01:01:09.151064 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:01:09.151334 systemd[1]: Stopped kubelet.service. Nov 1 01:01:09.154607 kernel: kauditd_printk_skb: 39 callbacks suppressed Nov 1 01:01:09.154674 kernel: audit: type=1131 audit(1761958869.149:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:09.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:09.156762 systemd[1]: Starting kubelet.service... Nov 1 01:01:09.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:09.882366 systemd[1]: Started kubelet.service. Nov 1 01:01:09.886706 kernel: audit: type=1130 audit(1761958869.880:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:09.942214 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:01:09.942214 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:01:09.942214 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 01:01:09.942486 kubelet[2289]: I1101 01:01:09.942251 2289 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 01:01:09.946252 kubelet[2289]: I1101 01:01:09.946239 2289 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 01:01:09.946322 kubelet[2289]: I1101 01:01:09.946314 2289 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 01:01:09.946612 kubelet[2289]: I1101 01:01:09.946604 2289 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 01:01:09.948997 kubelet[2289]: I1101 01:01:09.948323 2289 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 01:01:09.959833 kubelet[2289]: I1101 01:01:09.959470 2289 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 01:01:09.961432 kubelet[2289]: E1101 01:01:09.961420 2289 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 01:01:09.961492 kubelet[2289]: I1101 01:01:09.961484 2289 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 01:01:09.965720 kubelet[2289]: I1101 01:01:09.965709 2289 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 01:01:09.967461 kubelet[2289]: I1101 01:01:09.967443 2289 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 01:01:09.967604 kubelet[2289]: I1101 01:01:09.967508 2289 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 01:01:09.967708 kubelet[2289]: I1101 01:01:09.967700 2289 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 01:01:09.967754 kubelet[2289]: I1101 01:01:09.967747 2289 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 01:01:09.967819 kubelet[2289]: I1101 01:01:09.967812 2289 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:01:09.968506 kubelet[2289]: I1101 01:01:09.968499 2289 kubelet.go:446] "Attempting to sync node with API server" Nov 1 01:01:09.968564 kubelet[2289]: I1101 01:01:09.968557 2289 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 01:01:09.968622 kubelet[2289]: I1101 01:01:09.968614 2289 kubelet.go:352] "Adding apiserver pod source" Nov 1 01:01:09.968687 kubelet[2289]: I1101 01:01:09.968680 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 01:01:09.977600 kubelet[2289]: I1101 01:01:09.977589 2289 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 01:01:09.977899 kubelet[2289]: I1101 01:01:09.977890 2289 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 01:01:09.978170 kubelet[2289]: I1101 01:01:09.978162 2289 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 01:01:09.978227 kubelet[2289]: I1101 01:01:09.978220 2289 server.go:1287] "Started kubelet" Nov 1 01:01:09.982319 kubelet[2289]: I1101 01:01:09.979439 2289 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 01:01:09.982319 kubelet[2289]: I1101 01:01:09.979990 2289 server.go:479] "Adding debug handlers to kubelet server" Nov 1 01:01:09.982319 kubelet[2289]: I1101 01:01:09.980343 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 01:01:09.982319 kubelet[2289]: I1101 01:01:09.980518 2289 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 01:01:09.986000 audit[2289]: AVC avc: denied { mac_admin } for pid=2289 
comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:09.986000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:09.991754 kubelet[2289]: E1101 01:01:09.991743 2289 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:01:09.991839 kernel: audit: type=1400 audit(1761958869.986:229): avc: denied { mac_admin } for pid=2289 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:09.992554 kernel: audit: type=1401 audit(1761958869.986:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:09.992568 kernel: audit: type=1300 audit(1761958869.986:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000d1e0f0 a1=c000c92918 a2=c000d1e0c0 a3=25 items=0 ppid=1 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:09.986000 audit[2289]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d1e0f0 a1=c000c92918 a2=c000d1e0c0 a3=25 items=0 ppid=1 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:09.995909 kubelet[2289]: I1101 01:01:09.995698 2289 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 01:01:09.995909 kubelet[2289]: I1101 01:01:09.995732 2289 kubelet.go:1511] 
"Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 01:01:09.995909 kubelet[2289]: I1101 01:01:09.995758 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 01:01:09.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:09.997749 kubelet[2289]: I1101 01:01:09.997738 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 01:01:09.999926 kernel: audit: type=1327 audit(1761958869.986:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:09.999981 kernel: audit: type=1400 audit(1761958869.994:230): avc: denied { mac_admin } for pid=2289 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:09.994000 audit[2289]: AVC avc: denied { mac_admin } for pid=2289 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:10.001054 kubelet[2289]: I1101 01:01:10.001043 2289 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 01:01:10.001174 kubelet[2289]: I1101 01:01:10.001166 2289 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:01:10.001281 kubelet[2289]: I1101 
01:01:10.001275 2289 reconciler.go:26] "Reconciler: start to sync state" Nov 1 01:01:09.994000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:10.004234 kernel: audit: type=1401 audit(1761958869.994:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:10.004277 kernel: audit: type=1300 audit(1761958869.994:230): arch=c000003e syscall=188 success=no exit=-22 a0=c000c4e060 a1=c000bd8018 a2=c000c7e180 a3=25 items=0 ppid=1 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:09.994000 audit[2289]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c4e060 a1=c000bd8018 a2=c000c7e180 a3=25 items=0 ppid=1 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:10.005761 kubelet[2289]: I1101 01:01:10.005739 2289 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:01:10.005839 kubelet[2289]: I1101 01:01:10.005832 2289 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:01:10.005957 kubelet[2289]: I1101 01:01:10.005945 2289 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:01:09.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:10.013683 kernel: audit: type=1327 audit(1761958869.994:230): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:10.015058 kubelet[2289]: I1101 01:01:10.015030 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:01:10.015058 kubelet[2289]: I1101 01:01:10.015690 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:01:10.015058 kubelet[2289]: I1101 01:01:10.015706 2289 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:01:10.015058 kubelet[2289]: I1101 01:01:10.015719 2289 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 01:01:10.015058 kubelet[2289]: I1101 01:01:10.015725 2289 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:01:10.015058 kubelet[2289]: E1101 01:01:10.015751 2289 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:01:10.055175 kubelet[2289]: I1101 01:01:10.055160 2289 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:01:10.055280 kubelet[2289]: I1101 01:01:10.055270 2289 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:01:10.055330 kubelet[2289]: I1101 01:01:10.055324 2289 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:01:10.055460 kubelet[2289]: I1101 01:01:10.055452 2289 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 01:01:10.055514 kubelet[2289]: I1101 01:01:10.055497 2289 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 01:01:10.055560 kubelet[2289]: I1101 01:01:10.055553 2289 policy_none.go:49] "None policy: Start" Nov 1 01:01:10.055604 kubelet[2289]: I1101 01:01:10.055597 2289 
memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:01:10.055646 kubelet[2289]: I1101 01:01:10.055639 2289 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:01:10.055776 kubelet[2289]: I1101 01:01:10.055769 2289 state_mem.go:75] "Updated machine memory state" Nov 1 01:01:10.056392 kubelet[2289]: I1101 01:01:10.056383 2289 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:01:10.054000 audit[2289]: AVC avc: denied { mac_admin } for pid=2289 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:10.054000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 01:01:10.054000 audit[2289]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d1fe90 a1=c000b9c6d8 a2=c000d1fe60 a3=25 items=0 ppid=1 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:10.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 01:01:10.056635 kubelet[2289]: I1101 01:01:10.056624 2289 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 01:01:10.056763 kubelet[2289]: I1101 01:01:10.056756 2289 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:01:10.057137 kubelet[2289]: I1101 01:01:10.057115 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:01:10.058336 kubelet[2289]: I1101 01:01:10.058297 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:01:10.059767 kubelet[2289]: E1101 01:01:10.059757 2289 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:01:10.116538 kubelet[2289]: I1101 01:01:10.116488 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:10.125852 kubelet[2289]: I1101 01:01:10.125614 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 01:01:10.125852 kubelet[2289]: I1101 01:01:10.125792 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:10.160972 kubelet[2289]: I1101 01:01:10.160909 2289 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 01:01:10.165524 kubelet[2289]: I1101 01:01:10.165507 2289 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 01:01:10.165650 kubelet[2289]: I1101 01:01:10.165643 2289 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 01:01:10.303326 kubelet[2289]: I1101 01:01:10.303290 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:10.303492 kubelet[2289]: I1101 01:01:10.303478 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:10.303559 kubelet[2289]: I1101 01:01:10.303550 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b7e82b23872942deb692a4f9d774854-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b7e82b23872942deb692a4f9d774854\") " pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:10.303619 kubelet[2289]: I1101 01:01:10.303610 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:10.303693 kubelet[2289]: I1101 01:01:10.303684 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:10.303775 kubelet[2289]: I1101 01:01:10.303745 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 01:01:10.303840 kubelet[2289]: I1101 01:01:10.303831 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 01:01:10.303926 kubelet[2289]: I1101 01:01:10.303908 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b7e82b23872942deb692a4f9d774854-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b7e82b23872942deb692a4f9d774854\") " pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:10.303986 kubelet[2289]: I1101 01:01:10.303978 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b7e82b23872942deb692a4f9d774854-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b7e82b23872942deb692a4f9d774854\") " pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:10.970457 kubelet[2289]: I1101 01:01:10.970434 2289 apiserver.go:52] "Watching apiserver" Nov 1 01:01:11.001605 kubelet[2289]: I1101 01:01:11.001576 2289 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 01:01:11.033902 kubelet[2289]: I1101 01:01:11.033887 2289 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:11.037795 kubelet[2289]: E1101 01:01:11.037772 2289 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Nov 1 01:01:11.047391 kubelet[2289]: I1101 01:01:11.047357 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.047346124 podStartE2EDuration="1.047346124s" podCreationTimestamp="2025-11-01 01:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:11.046972791 +0000 UTC m=+1.146578403" watchObservedRunningTime="2025-11-01 01:01:11.047346124 +0000 UTC m=+1.146951731" Nov 1 01:01:11.056518 kubelet[2289]: I1101 01:01:11.056487 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.056475896 podStartE2EDuration="1.056475896s" podCreationTimestamp="2025-11-01 01:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:11.052092922 +0000 UTC m=+1.151698536" watchObservedRunningTime="2025-11-01 01:01:11.056475896 +0000 UTC m=+1.156081505" Nov 1 01:01:11.061773 kubelet[2289]: I1101 01:01:11.061743 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.061729293 podStartE2EDuration="1.061729293s" podCreationTimestamp="2025-11-01 01:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:11.056778926 +0000 UTC m=+1.156384541" watchObservedRunningTime="2025-11-01 01:01:11.061729293 +0000 UTC m=+1.161334900" Nov 1 01:01:14.955154 kubelet[2289]: I1101 01:01:14.955131 2289 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 01:01:14.955696 env[1357]: time="2025-11-01T01:01:14.955628173Z" level=info msg="No cni config 
template is specified, wait for other system components to drop the config." Nov 1 01:01:14.955866 kubelet[2289]: I1101 01:01:14.955735 2289 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 01:01:15.734048 kubelet[2289]: I1101 01:01:15.734017 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a-kube-proxy\") pod \"kube-proxy-g24l6\" (UID: \"6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a\") " pod="kube-system/kube-proxy-g24l6" Nov 1 01:01:15.734166 kubelet[2289]: I1101 01:01:15.734052 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a-lib-modules\") pod \"kube-proxy-g24l6\" (UID: \"6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a\") " pod="kube-system/kube-proxy-g24l6" Nov 1 01:01:15.734166 kubelet[2289]: I1101 01:01:15.734086 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrd8\" (UniqueName: \"kubernetes.io/projected/6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a-kube-api-access-djrd8\") pod \"kube-proxy-g24l6\" (UID: \"6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a\") " pod="kube-system/kube-proxy-g24l6" Nov 1 01:01:15.734166 kubelet[2289]: I1101 01:01:15.734103 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a-xtables-lock\") pod \"kube-proxy-g24l6\" (UID: \"6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a\") " pod="kube-system/kube-proxy-g24l6" Nov 1 01:01:15.838607 kubelet[2289]: I1101 01:01:15.838585 2289 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 01:01:15.974134 env[1357]: time="2025-11-01T01:01:15.973833646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g24l6,Uid:6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:15.983952 env[1357]: time="2025-11-01T01:01:15.983895706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:15.984286 env[1357]: time="2025-11-01T01:01:15.983955084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:15.984286 env[1357]: time="2025-11-01T01:01:15.983970838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:15.984433 env[1357]: time="2025-11-01T01:01:15.984410902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df6014b135115f8fc5ac0e8e8299051cc6970431b2539c49035046bad355c244 pid=2335 runtime=io.containerd.runc.v2 Nov 1 01:01:16.017432 env[1357]: time="2025-11-01T01:01:16.017407828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g24l6,Uid:6b7b6dca-b907-4f27-a9cb-357f1b0d2a3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"df6014b135115f8fc5ac0e8e8299051cc6970431b2539c49035046bad355c244\"" Nov 1 01:01:16.020155 env[1357]: time="2025-11-01T01:01:16.020131878Z" level=info msg="CreateContainer within sandbox \"df6014b135115f8fc5ac0e8e8299051cc6970431b2539c49035046bad355c244\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 01:01:16.026792 env[1357]: time="2025-11-01T01:01:16.026770746Z" level=info msg="CreateContainer within sandbox \"df6014b135115f8fc5ac0e8e8299051cc6970431b2539c49035046bad355c244\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da464e96ee696d432935078a142fa39c23cb16d8124baceff330fb7a783beb0e\"" Nov 1 01:01:16.027224 env[1357]: time="2025-11-01T01:01:16.027211214Z" level=info msg="StartContainer for \"da464e96ee696d432935078a142fa39c23cb16d8124baceff330fb7a783beb0e\"" Nov 1 01:01:16.035794 kubelet[2289]: I1101 01:01:16.035766 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f63ef1ee-c5f1-4531-810a-05451dd74dea-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2skf2\" (UID: \"f63ef1ee-c5f1-4531-810a-05451dd74dea\") " pod="tigera-operator/tigera-operator-7dcd859c48-2skf2" Nov 1 01:01:16.035794 kubelet[2289]: I1101 01:01:16.035795 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55sp7\" (UniqueName: \"kubernetes.io/projected/f63ef1ee-c5f1-4531-810a-05451dd74dea-kube-api-access-55sp7\") pod \"tigera-operator-7dcd859c48-2skf2\" (UID: \"f63ef1ee-c5f1-4531-810a-05451dd74dea\") " pod="tigera-operator/tigera-operator-7dcd859c48-2skf2" Nov 1 01:01:16.063101 env[1357]: time="2025-11-01T01:01:16.063069964Z" level=info msg="StartContainer for \"da464e96ee696d432935078a142fa39c23cb16d8124baceff330fb7a783beb0e\" returns successfully" Nov 1 01:01:16.234440 env[1357]: time="2025-11-01T01:01:16.234371442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2skf2,Uid:f63ef1ee-c5f1-4531-810a-05451dd74dea,Namespace:tigera-operator,Attempt:0,}" Nov 1 01:01:16.243257 env[1357]: time="2025-11-01T01:01:16.243106883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:16.243257 env[1357]: time="2025-11-01T01:01:16.243139129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:16.243257 env[1357]: time="2025-11-01T01:01:16.243151448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:16.243510 env[1357]: time="2025-11-01T01:01:16.243482412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05407793745a3711bf5cb3ee3c02c4155031bc9f57eb3e86cdd689a2ef964343 pid=2411 runtime=io.containerd.runc.v2 Nov 1 01:01:16.287617 env[1357]: time="2025-11-01T01:01:16.287589170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2skf2,Uid:f63ef1ee-c5f1-4531-810a-05451dd74dea,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"05407793745a3711bf5cb3ee3c02c4155031bc9f57eb3e86cdd689a2ef964343\"" Nov 1 01:01:16.289053 env[1357]: time="2025-11-01T01:01:16.289036455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:01:16.845000 audit[2478]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.849385 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 01:01:16.849442 kernel: audit: type=1325 audit(1761958876.845:232): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.849476 kernel: audit: type=1325 audit(1761958876.845:233): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:16.845000 audit[2479]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:16.845000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc67536770 a2=0 a3=7ffc6753675c items=0 ppid=2387 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.855012 kernel: audit: type=1300 audit(1761958876.845:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc67536770 a2=0 a3=7ffc6753675c items=0 ppid=2387 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:01:16.856932 kernel: audit: type=1327 audit(1761958876.845:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:01:16.845000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda7281a10 a2=0 a3=7ffda72819fc items=0 ppid=2387 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.861167 kernel: audit: type=1300 audit(1761958876.845:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda7281a10 a2=0 a3=7ffda72819fc items=0 ppid=2387 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.861208 kernel: audit: type=1327 audit(1761958876.845:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 01:01:16.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 
01:01:16.854000 audit[2481]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:16.864793 kernel: audit: type=1325 audit(1761958876.854:234): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:16.864829 kernel: audit: type=1300 audit(1761958876.854:234): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd54f524b0 a2=0 a3=7ffd54f5249c items=0 ppid=2387 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.854000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd54f524b0 a2=0 a3=7ffd54f5249c items=0 ppid=2387 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:01:16.870257 kernel: audit: type=1327 audit(1761958876.854:234): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:01:16.870292 kernel: audit: type=1325 audit(1761958876.855:235): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:16.855000 audit[2482]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:16.855000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe22ce54f0 a2=0 a3=7ffe22ce54dc items=0 ppid=2387 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 01:01:16.860000 audit[2483]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.860000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd83ea1c70 a2=0 a3=7ffd83ea1c5c items=0 ppid=2387 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 01:01:16.861000 audit[2484]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.861000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5eba7c80 a2=0 a3=7ffc5eba7c6c items=0 ppid=2387 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 01:01:16.885679 systemd[1]: run-containerd-runc-k8s.io-df6014b135115f8fc5ac0e8e8299051cc6970431b2539c49035046bad355c244-runc.SzjUtQ.mount: Deactivated successfully. 
Nov 1 01:01:16.978000 audit[2485]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.978000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcd530d0e0 a2=0 a3=7ffcd530d0cc items=0 ppid=2387 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.978000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 01:01:16.983000 audit[2487]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.983000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe7e49caa0 a2=0 a3=7ffe7e49ca8c items=0 ppid=2387 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 01:01:16.986000 audit[2490]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.986000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe64eec9a0 a2=0 a3=7ffe64eec98c items=0 ppid=2387 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 01:01:16.987000 audit[2491]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.987000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff98ddb500 a2=0 a3=7fff98ddb4ec items=0 ppid=2387 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.987000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 01:01:16.989000 audit[2493]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.989000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1a846110 a2=0 a3=7ffd1a8460fc items=0 ppid=2387 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 01:01:16.989000 audit[2494]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
01:01:16.989000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe67806ee0 a2=0 a3=7ffe67806ecc items=0 ppid=2387 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 01:01:16.991000 audit[2496]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.991000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5060b6d0 a2=0 a3=7ffd5060b6bc items=0 ppid=2387 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 01:01:16.993000 audit[2499]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.993000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc2beac220 a2=0 a3=7ffc2beac20c items=0 ppid=2387 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.993000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 01:01:16.994000 audit[2500]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.994000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff803c1930 a2=0 a3=7fff803c191c items=0 ppid=2387 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.994000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 01:01:16.996000 audit[2502]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.996000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe551b8c90 a2=0 a3=7ffe551b8c7c items=0 ppid=2387 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.996000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 01:01:16.997000 audit[2503]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.997000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc427e1980 a2=0 
a3=7ffc427e196c items=0 ppid=2387 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 01:01:16.999000 audit[2505]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:16.999000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1f0f2550 a2=0 a3=7fff1f0f253c items=0 ppid=2387 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:16.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 01:01:17.001000 audit[2508]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.001000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe312c40e0 a2=0 a3=7ffe312c40cc items=0 ppid=2387 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.001000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 01:01:17.004000 audit[2511]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.004000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff2ad7060 a2=0 a3=7ffff2ad704c items=0 ppid=2387 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 01:01:17.005000 audit[2512]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.005000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe69d5afc0 a2=0 a3=7ffe69d5afac items=0 ppid=2387 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 01:01:17.007000 audit[2514]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.007000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7ffd6cda78c0 a2=0 a3=7ffd6cda78ac items=0 ppid=2387 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:01:17.009000 audit[2517]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.009000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcfa9782b0 a2=0 a3=7ffcfa97829c items=0 ppid=2387 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.009000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:01:17.010000 audit[2518]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.010000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0e522a80 a2=0 a3=7ffe0e522a6c items=0 ppid=2387 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 01:01:17.011000 
audit[2520]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 01:01:17.011000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffec8f62a70 a2=0 a3=7ffec8f62a5c items=0 ppid=2387 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 01:01:17.034000 audit[2526]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:17.034000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed93cbb60 a2=0 a3=7ffed93cbb4c items=0 ppid=2387 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:17.049000 audit[2526]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:17.049000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffed93cbb60 a2=0 a3=7ffed93cbb4c items=0 ppid=2387 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
01:01:17.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:17.050000 audit[2531]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.050000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe5bc756e0 a2=0 a3=7ffe5bc756cc items=0 ppid=2387 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 01:01:17.054000 audit[2533]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.054000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffcb739740 a2=0 a3=7fffcb73972c items=0 ppid=2387 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 01:01:17.057000 audit[2536]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.057000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd14893d80 a2=0 a3=7ffd14893d6c items=0 ppid=2387 pid=2536 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 01:01:17.057000 audit[2537]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.057000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2c1bf8e0 a2=0 a3=7ffc2c1bf8cc items=0 ppid=2387 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 01:01:17.059000 audit[2539]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.059000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0f2ede30 a2=0 a3=7ffd0f2ede1c items=0 ppid=2387 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 
01:01:17.061000 audit[2540]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.061000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd83606150 a2=0 a3=7ffd8360613c items=0 ppid=2387 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 01:01:17.062000 audit[2542]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.062000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdfed4b3e0 a2=0 a3=7ffdfed4b3cc items=0 ppid=2387 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 01:01:17.067005 kubelet[2289]: I1101 01:01:17.066972 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g24l6" podStartSLOduration=2.066958901 podStartE2EDuration="2.066958901s" podCreationTimestamp="2025-11-01 01:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:17.053299377 +0000 UTC m=+7.152904992" watchObservedRunningTime="2025-11-01 
01:01:17.066958901 +0000 UTC m=+7.166564510" Nov 1 01:01:17.066000 audit[2545]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.066000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe3a70bbc0 a2=0 a3=7ffe3a70bbac items=0 ppid=2387 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 01:01:17.067000 audit[2546]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.067000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd57ef7f0 a2=0 a3=7fffd57ef7dc items=0 ppid=2387 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 01:01:17.069000 audit[2548]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.069000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffecc610740 a2=0 a3=7ffecc61072c items=0 ppid=2387 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 01:01:17.070000 audit[2549]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.070000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff826e5800 a2=0 a3=7fff826e57ec items=0 ppid=2387 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 01:01:17.071000 audit[2551]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.071000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3ed97f00 a2=0 a3=7ffc3ed97eec items=0 ppid=2387 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 01:01:17.074000 audit[2554]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2554 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.074000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff68f5dd80 a2=0 a3=7fff68f5dd6c items=0 ppid=2387 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 01:01:17.076000 audit[2557]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.076000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe60510890 a2=0 a3=7ffe6051087c items=0 ppid=2387 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 01:01:17.077000 audit[2558]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.077000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedd9ee490 a2=0 a3=7ffedd9ee47c items=0 ppid=2387 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 01:01:17.078000 audit[2560]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.078000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff16822730 a2=0 a3=7fff1682271c items=0 ppid=2387 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:01:17.080000 audit[2563]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.080000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffca4bce940 a2=0 a3=7ffca4bce92c items=0 ppid=2387 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 01:01:17.081000 audit[2564]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.081000 audit[2564]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcef124e0 a2=0 a3=7ffdcef124cc items=0 ppid=2387 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 01:01:17.083000 audit[2566]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.083000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff1d29b240 a2=0 a3=7fff1d29b22c items=0 ppid=2387 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 01:01:17.083000 audit[2567]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.083000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef4615870 a2=0 a3=7ffef461585c items=0 ppid=2387 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 01:01:17.085000 audit[2569]: NETFILTER_CFG 
table=filter:85 family=10 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.085000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7a652140 a2=0 a3=7ffc7a65212c items=0 ppid=2387 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:01:17.088000 audit[2572]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 01:01:17.088000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc1288c4d0 a2=0 a3=7ffc1288c4bc items=0 ppid=2387 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 01:01:17.090000 audit[2574]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 01:01:17.090000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeb4d7a980 a2=0 a3=7ffeb4d7a96c items=0 ppid=2387 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.090000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:17.090000 audit[2574]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 01:01:17.090000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeb4d7a980 a2=0 a3=7ffeb4d7a96c items=0 ppid=2387 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:17.090000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:17.560458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971171585.mount: Deactivated successfully. Nov 1 01:01:18.443007 env[1357]: time="2025-11-01T01:01:18.442968239Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:18.449934 env[1357]: time="2025-11-01T01:01:18.449908147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:18.455372 env[1357]: time="2025-11-01T01:01:18.455355836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:18.464098 env[1357]: time="2025-11-01T01:01:18.464057952Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
01:01:18.464616 env[1357]: time="2025-11-01T01:01:18.464596498Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:01:18.467628 env[1357]: time="2025-11-01T01:01:18.467591158Z" level=info msg="CreateContainer within sandbox \"05407793745a3711bf5cb3ee3c02c4155031bc9f57eb3e86cdd689a2ef964343\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:01:18.527039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478513810.mount: Deactivated successfully. Nov 1 01:01:18.529869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399791601.mount: Deactivated successfully. Nov 1 01:01:18.572419 env[1357]: time="2025-11-01T01:01:18.572365629Z" level=info msg="CreateContainer within sandbox \"05407793745a3711bf5cb3ee3c02c4155031bc9f57eb3e86cdd689a2ef964343\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b7240e0a99c0daa54d6fe2bcac9ea801d53de5d064c10b60008d16ac95ae8214\"" Nov 1 01:01:18.572911 env[1357]: time="2025-11-01T01:01:18.572889663Z" level=info msg="StartContainer for \"b7240e0a99c0daa54d6fe2bcac9ea801d53de5d064c10b60008d16ac95ae8214\"" Nov 1 01:01:18.623460 env[1357]: time="2025-11-01T01:01:18.623435262Z" level=info msg="StartContainer for \"b7240e0a99c0daa54d6fe2bcac9ea801d53de5d064c10b60008d16ac95ae8214\" returns successfully" Nov 1 01:01:19.253331 kubelet[2289]: I1101 01:01:19.253287 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2skf2" podStartSLOduration=2.076390838 podStartE2EDuration="4.253275404s" podCreationTimestamp="2025-11-01 01:01:15 +0000 UTC" firstStartedPulling="2025-11-01 01:01:16.288619821 +0000 UTC m=+6.388225429" lastFinishedPulling="2025-11-01 01:01:18.465504388 +0000 UTC m=+8.565109995" observedRunningTime="2025-11-01 01:01:19.062116023 +0000 UTC m=+9.161721649" 
watchObservedRunningTime="2025-11-01 01:01:19.253275404 +0000 UTC m=+9.352881014" Nov 1 01:01:24.008735 sudo[1621]: pam_unix(sudo:session): session closed for user root Nov 1 01:01:24.013392 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 01:01:24.013455 kernel: audit: type=1106 audit(1761958884.007:283): pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:01:24.007000 audit[1621]: USER_END pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:01:24.007000 audit[1621]: CRED_DISP pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 01:01:24.017684 kernel: audit: type=1104 audit(1761958884.007:284): pid=1621 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 01:01:24.049902 sshd[1615]: pam_unix(sshd:session): session closed for user core Nov 1 01:01:24.053000 audit[1615]: USER_END pid=1615 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:01:24.053000 audit[1615]: CRED_DISP pid=1615 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:01:24.062657 kernel: audit: type=1106 audit(1761958884.053:285): pid=1615 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:01:24.062764 kernel: audit: type=1104 audit(1761958884.053:286): pid=1615 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:01:24.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.108:22-147.75.109.163:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:24.065734 systemd[1]: sshd@6-139.178.70.108:22-147.75.109.163:55818.service: Deactivated successfully. 
Nov 1 01:01:24.070319 kernel: audit: type=1131 audit(1761958884.064:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.108:22-147.75.109.163:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:01:24.069743 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 01:01:24.070010 systemd-logind[1341]: Session 9 logged out. Waiting for processes to exit. Nov 1 01:01:24.071303 systemd-logind[1341]: Removed session 9. Nov 1 01:01:24.540000 audit[2657]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:24.540000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8386bfd0 a2=0 a3=7ffe8386bfbc items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:24.548299 kernel: audit: type=1325 audit(1761958884.540:288): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:24.548360 kernel: audit: type=1300 audit(1761958884.540:288): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8386bfd0 a2=0 a3=7ffe8386bfbc items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:24.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:24.550678 kernel: audit: type=1327 audit(1761958884.540:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Nov 1 01:01:24.549000 audit[2657]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:24.555679 kernel: audit: type=1325 audit(1761958884.549:289): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:24.549000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8386bfd0 a2=0 a3=0 items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:24.561679 kernel: audit: type=1300 audit(1761958884.549:289): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8386bfd0 a2=0 a3=0 items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:24.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:24.569000 audit[2659]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:24.569000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffedbc9c670 a2=0 a3=7ffedbc9c65c items=0 ppid=2387 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:24.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:24.572000 audit[2659]: NETFILTER_CFG table=nat:92 
family=2 entries=12 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:24.572000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedbc9c670 a2=0 a3=0 items=0 ppid=2387 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:24.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:27.471000 audit[2661]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:27.471000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe3e3f6de0 a2=0 a3=7ffe3e3f6dcc items=0 ppid=2387 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:27.471000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:27.476000 audit[2661]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:27.476000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3e3f6de0 a2=0 a3=0 items=0 ppid=2387 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:27.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:27.516000 
audit[2663]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:27.516000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe57a2fe40 a2=0 a3=7ffe57a2fe2c items=0 ppid=2387 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:27.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:27.520000 audit[2663]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:27.520000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe57a2fe40 a2=0 a3=0 items=0 ppid=2387 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:27.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:28.531000 audit[2665]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:28.531000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe64e91c40 a2=0 a3=7ffe64e91c2c items=0 ppid=2387 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:28.531000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:28.537000 audit[2665]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:28.537000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe64e91c40 a2=0 a3=0 items=0 ppid=2387 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:28.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:29.544872 kernel: kauditd_printk_skb: 25 callbacks suppressed Nov 1 01:01:29.544975 kernel: audit: type=1325 audit(1761958889.542:298): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:29.542000 audit[2667]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:29.542000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd33d0a340 a2=0 a3=7ffd33d0a32c items=0 ppid=2387 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:29.551110 kernel: audit: type=1300 audit(1761958889.542:298): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd33d0a340 a2=0 a3=7ffd33d0a32c items=0 ppid=2387 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
01:01:29.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:29.552932 kernel: audit: type=1327 audit(1761958889.542:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:29.552000 audit[2667]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:29.552000 audit[2667]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd33d0a340 a2=0 a3=0 items=0 ppid=2387 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:29.559584 kernel: audit: type=1325 audit(1761958889.552:299): table=nat:100 family=2 entries=12 op=nft_register_rule pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:29.559637 kernel: audit: type=1300 audit(1761958889.552:299): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd33d0a340 a2=0 a3=0 items=0 ppid=2387 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:29.559655 kernel: audit: type=1327 audit(1761958889.552:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:29.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:29.571604 kubelet[2289]: I1101 01:01:29.571571 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0-typha-certs\") pod \"calico-typha-68f99ddcb9-n4s98\" (UID: \"5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0\") " pod="calico-system/calico-typha-68f99ddcb9-n4s98" Nov 1 01:01:29.571898 kubelet[2289]: I1101 01:01:29.571608 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0-tigera-ca-bundle\") pod \"calico-typha-68f99ddcb9-n4s98\" (UID: \"5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0\") " pod="calico-system/calico-typha-68f99ddcb9-n4s98" Nov 1 01:01:29.571898 kubelet[2289]: I1101 01:01:29.571628 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5bw\" (UniqueName: \"kubernetes.io/projected/5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0-kube-api-access-7g5bw\") pod \"calico-typha-68f99ddcb9-n4s98\" (UID: \"5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0\") " pod="calico-system/calico-typha-68f99ddcb9-n4s98" Nov 1 01:01:29.773345 kubelet[2289]: I1101 01:01:29.773317 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d278d62-614e-4629-8149-6759d48a73f6-node-certs\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773345 kubelet[2289]: I1101 01:01:29.773345 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-cni-bin-dir\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773462 kubelet[2289]: I1101 01:01:29.773355 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-xtables-lock\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773462 kubelet[2289]: I1101 01:01:29.773380 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-var-lib-calico\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773462 kubelet[2289]: I1101 01:01:29.773393 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2cg\" (UniqueName: \"kubernetes.io/projected/6d278d62-614e-4629-8149-6759d48a73f6-kube-api-access-qh2cg\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773462 kubelet[2289]: I1101 01:01:29.773405 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-cni-log-dir\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773462 kubelet[2289]: I1101 01:01:29.773414 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d278d62-614e-4629-8149-6759d48a73f6-tigera-ca-bundle\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773567 kubelet[2289]: I1101 01:01:29.773423 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-flexvol-driver-host\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773567 kubelet[2289]: I1101 01:01:29.773432 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-lib-modules\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773567 kubelet[2289]: I1101 01:01:29.773458 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-cni-net-dir\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773567 kubelet[2289]: I1101 01:01:29.773474 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-policysync\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.773567 kubelet[2289]: I1101 01:01:29.773484 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d278d62-614e-4629-8149-6759d48a73f6-var-run-calico\") pod \"calico-node-v446r\" (UID: \"6d278d62-614e-4629-8149-6759d48a73f6\") " pod="calico-system/calico-node-v446r" Nov 1 01:01:29.793553 env[1357]: time="2025-11-01T01:01:29.793511197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68f99ddcb9-n4s98,Uid:5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:29.826532 env[1357]: 
time="2025-11-01T01:01:29.826417610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:29.827374 env[1357]: time="2025-11-01T01:01:29.826649883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:29.827470 env[1357]: time="2025-11-01T01:01:29.827449186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:29.827704 env[1357]: time="2025-11-01T01:01:29.827649535Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49e4806bcdb369df2763bf19a0e94874059073e99b09ded55965c605aa92cad8 pid=2679 runtime=io.containerd.runc.v2 Nov 1 01:01:29.879015 kubelet[2289]: E1101 01:01:29.878995 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.879282 kubelet[2289]: W1101 01:01:29.879246 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.884748 kubelet[2289]: E1101 01:01:29.884713 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.884889 kubelet[2289]: W1101 01:01:29.884876 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.887156 kubelet[2289]: E1101 01:01:29.887141 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.887584 kubelet[2289]: E1101 01:01:29.887296 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.890079 kubelet[2289]: E1101 01:01:29.890066 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.890193 kubelet[2289]: W1101 01:01:29.890181 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.890270 kubelet[2289]: E1101 01:01:29.890260 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.893334 env[1357]: time="2025-11-01T01:01:29.893313472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68f99ddcb9-n4s98,Uid:5fa24c8c-7ec4-4d2c-b0d5-2ed6241f2ef0,Namespace:calico-system,Attempt:0,} returns sandbox id \"49e4806bcdb369df2763bf19a0e94874059073e99b09ded55965c605aa92cad8\"" Nov 1 01:01:29.894724 env[1357]: time="2025-11-01T01:01:29.894709001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:01:29.909413 kubelet[2289]: E1101 01:01:29.909391 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:29.918929 kubelet[2289]: E1101 01:01:29.918913 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.919031 
kubelet[2289]: W1101 01:01:29.919020 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.919085 kubelet[2289]: E1101 01:01:29.919076 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.919223 kubelet[2289]: E1101 01:01:29.919217 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.919272 kubelet[2289]: W1101 01:01:29.919264 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.919319 kubelet[2289]: E1101 01:01:29.919312 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.919442 kubelet[2289]: E1101 01:01:29.919437 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.919489 kubelet[2289]: W1101 01:01:29.919482 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.919535 kubelet[2289]: E1101 01:01:29.919527 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.919691 kubelet[2289]: E1101 01:01:29.919685 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.919738 kubelet[2289]: W1101 01:01:29.919730 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.919786 kubelet[2289]: E1101 01:01:29.919778 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.919965 kubelet[2289]: E1101 01:01:29.919952 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.920011 kubelet[2289]: W1101 01:01:29.920003 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.921694 kubelet[2289]: E1101 01:01:29.921373 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.921935 kubelet[2289]: E1101 01:01:29.921928 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.921992 kubelet[2289]: W1101 01:01:29.921983 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.922039 kubelet[2289]: E1101 01:01:29.922031 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.922164 kubelet[2289]: E1101 01:01:29.922159 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.922208 kubelet[2289]: W1101 01:01:29.922200 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.922256 kubelet[2289]: E1101 01:01:29.922248 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.925299 kubelet[2289]: E1101 01:01:29.925286 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.925555 kubelet[2289]: W1101 01:01:29.925545 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.925612 kubelet[2289]: E1101 01:01:29.925603 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.926189 kubelet[2289]: E1101 01:01:29.926169 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.926258 kubelet[2289]: W1101 01:01:29.926250 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.926306 kubelet[2289]: E1101 01:01:29.926298 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.926443 kubelet[2289]: E1101 01:01:29.926437 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.926492 kubelet[2289]: W1101 01:01:29.926485 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.926535 kubelet[2289]: E1101 01:01:29.926528 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.926691 kubelet[2289]: E1101 01:01:29.926657 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.926741 kubelet[2289]: W1101 01:01:29.926733 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.926786 kubelet[2289]: E1101 01:01:29.926777 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.926927 kubelet[2289]: E1101 01:01:29.926921 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.926973 kubelet[2289]: W1101 01:01:29.926965 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.927018 kubelet[2289]: E1101 01:01:29.927011 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.927209 kubelet[2289]: E1101 01:01:29.927203 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.927255 kubelet[2289]: W1101 01:01:29.927247 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.927300 kubelet[2289]: E1101 01:01:29.927293 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.927424 kubelet[2289]: E1101 01:01:29.927418 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.927469 kubelet[2289]: W1101 01:01:29.927461 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.927513 kubelet[2289]: E1101 01:01:29.927506 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.927655 kubelet[2289]: E1101 01:01:29.927649 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.927720 kubelet[2289]: W1101 01:01:29.927713 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.927772 kubelet[2289]: E1101 01:01:29.927764 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.927895 kubelet[2289]: E1101 01:01:29.927889 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.927941 kubelet[2289]: W1101 01:01:29.927933 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.927986 kubelet[2289]: E1101 01:01:29.927978 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.928119 kubelet[2289]: E1101 01:01:29.928114 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.928165 kubelet[2289]: W1101 01:01:29.928158 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.928211 kubelet[2289]: E1101 01:01:29.928204 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.928333 kubelet[2289]: E1101 01:01:29.928328 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.928379 kubelet[2289]: W1101 01:01:29.928372 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.928423 kubelet[2289]: E1101 01:01:29.928416 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.928546 kubelet[2289]: E1101 01:01:29.928541 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.928591 kubelet[2289]: W1101 01:01:29.928583 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.928635 kubelet[2289]: E1101 01:01:29.928628 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.928772 kubelet[2289]: E1101 01:01:29.928767 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.928820 kubelet[2289]: W1101 01:01:29.928812 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.928864 kubelet[2289]: E1101 01:01:29.928856 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.975600 kubelet[2289]: E1101 01:01:29.975575 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.975600 kubelet[2289]: W1101 01:01:29.975593 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.975600 kubelet[2289]: E1101 01:01:29.975605 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.975774 kubelet[2289]: I1101 01:01:29.975622 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda-socket-dir\") pod \"csi-node-driver-clw4d\" (UID: \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\") " pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:29.975873 kubelet[2289]: E1101 01:01:29.975859 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.975873 kubelet[2289]: W1101 01:01:29.975870 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.975935 kubelet[2289]: E1101 01:01:29.975876 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.975935 kubelet[2289]: I1101 01:01:29.975886 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda-varrun\") pod \"csi-node-driver-clw4d\" (UID: \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\") " pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:29.976060 kubelet[2289]: E1101 01:01:29.976047 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.976060 kubelet[2289]: W1101 01:01:29.976058 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.976112 kubelet[2289]: E1101 01:01:29.976064 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.976112 kubelet[2289]: I1101 01:01:29.976073 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda-kubelet-dir\") pod \"csi-node-driver-clw4d\" (UID: \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\") " pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:29.976224 kubelet[2289]: E1101 01:01:29.976209 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.976224 kubelet[2289]: W1101 01:01:29.976220 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.976283 kubelet[2289]: E1101 01:01:29.976229 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.976283 kubelet[2289]: I1101 01:01:29.976243 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda-registration-dir\") pod \"csi-node-driver-clw4d\" (UID: \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\") " pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:29.976447 kubelet[2289]: E1101 01:01:29.976431 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.976447 kubelet[2289]: W1101 01:01:29.976439 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.976447 kubelet[2289]: E1101 01:01:29.976446 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.976538 kubelet[2289]: I1101 01:01:29.976457 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442hg\" (UniqueName: \"kubernetes.io/projected/2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda-kube-api-access-442hg\") pod \"csi-node-driver-clw4d\" (UID: \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\") " pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:29.976599 kubelet[2289]: E1101 01:01:29.976587 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.976599 kubelet[2289]: W1101 01:01:29.976596 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.977659 kubelet[2289]: E1101 01:01:29.976605 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.977659 kubelet[2289]: E1101 01:01:29.976710 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.977659 kubelet[2289]: W1101 01:01:29.976717 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.977659 kubelet[2289]: E1101 01:01:29.976727 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.979718 kubelet[2289]: E1101 01:01:29.979702 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.980184 kubelet[2289]: W1101 01:01:29.979863 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.980184 kubelet[2289]: E1101 01:01:29.979884 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.980292 kubelet[2289]: E1101 01:01:29.980284 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.980344 kubelet[2289]: W1101 01:01:29.980336 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.980472 kubelet[2289]: E1101 01:01:29.980464 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.980547 kubelet[2289]: E1101 01:01:29.980541 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.980595 kubelet[2289]: W1101 01:01:29.980587 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.980726 kubelet[2289]: E1101 01:01:29.980718 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.983810 kubelet[2289]: E1101 01:01:29.983792 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.983932 kubelet[2289]: W1101 01:01:29.983918 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.984108 kubelet[2289]: E1101 01:01:29.984096 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.984262 kubelet[2289]: E1101 01:01:29.984255 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.984324 kubelet[2289]: W1101 01:01:29.984315 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.984409 kubelet[2289]: E1101 01:01:29.984402 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.984480 kubelet[2289]: E1101 01:01:29.984474 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.984525 kubelet[2289]: W1101 01:01:29.984516 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.984577 kubelet[2289]: E1101 01:01:29.984569 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:29.987806 kubelet[2289]: E1101 01:01:29.987791 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.987914 kubelet[2289]: W1101 01:01:29.987901 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.987987 kubelet[2289]: E1101 01:01:29.987977 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:29.989785 kubelet[2289]: E1101 01:01:29.989774 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:29.989874 kubelet[2289]: W1101 01:01:29.989864 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:29.989929 kubelet[2289]: E1101 01:01:29.989920 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.004578 env[1357]: time="2025-11-01T01:01:30.004553889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v446r,Uid:6d278d62-614e-4629-8149-6759d48a73f6,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:30.040569 env[1357]: time="2025-11-01T01:01:30.040527606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:30.040697 env[1357]: time="2025-11-01T01:01:30.040555512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:30.040697 env[1357]: time="2025-11-01T01:01:30.040562329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:30.041063 env[1357]: time="2025-11-01T01:01:30.040932020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87 pid=2769 runtime=io.containerd.runc.v2 Nov 1 01:01:30.074733 env[1357]: time="2025-11-01T01:01:30.074542762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v446r,Uid:6d278d62-614e-4629-8149-6759d48a73f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\"" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.077845 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080405 kubelet[2289]: W1101 01:01:30.077857 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.077870 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.077973 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080405 kubelet[2289]: W1101 01:01:30.077978 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.077983 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.078166 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080405 kubelet[2289]: W1101 01:01:30.078171 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.078177 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.080405 kubelet[2289]: E1101 01:01:30.078366 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080755 kubelet[2289]: W1101 01:01:30.078371 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080755 kubelet[2289]: E1101 01:01:30.078377 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.080755 kubelet[2289]: E1101 01:01:30.078540 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080755 kubelet[2289]: W1101 01:01:30.078545 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080755 kubelet[2289]: E1101 01:01:30.078550 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.080755 kubelet[2289]: E1101 01:01:30.078750 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080755 kubelet[2289]: W1101 01:01:30.078785 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080755 kubelet[2289]: E1101 01:01:30.078791 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.080755 kubelet[2289]: E1101 01:01:30.078957 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080755 kubelet[2289]: W1101 01:01:30.078966 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.078974 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.079165 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080965 kubelet[2289]: W1101 01:01:30.079171 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.079177 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.079320 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080965 kubelet[2289]: W1101 01:01:30.079350 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.079360 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.079520 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.080965 kubelet[2289]: W1101 01:01:30.079524 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.080965 kubelet[2289]: E1101 01:01:30.079529 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.079655 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081168 kubelet[2289]: W1101 01:01:30.079660 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.079697 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.079826 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081168 kubelet[2289]: W1101 01:01:30.079831 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.079864 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.080021 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081168 kubelet[2289]: W1101 01:01:30.080025 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.080030 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.081168 kubelet[2289]: E1101 01:01:30.080182 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081366 kubelet[2289]: W1101 01:01:30.080187 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081366 kubelet[2289]: E1101 01:01:30.080192 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.081366 kubelet[2289]: E1101 01:01:30.080340 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081366 kubelet[2289]: W1101 01:01:30.080345 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081366 kubelet[2289]: E1101 01:01:30.080379 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.081366 kubelet[2289]: E1101 01:01:30.080449 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081366 kubelet[2289]: W1101 01:01:30.080453 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081366 kubelet[2289]: E1101 01:01:30.080458 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.081366 kubelet[2289]: E1101 01:01:30.080542 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081366 kubelet[2289]: W1101 01:01:30.080546 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080553 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080648 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081545 kubelet[2289]: W1101 01:01:30.080653 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080658 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080751 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081545 kubelet[2289]: W1101 01:01:30.080757 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080763 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080854 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081545 kubelet[2289]: W1101 01:01:30.080860 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081545 kubelet[2289]: E1101 01:01:30.080866 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.081895 kubelet[2289]: E1101 01:01:30.080951 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.081895 kubelet[2289]: W1101 01:01:30.080956 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.081895 kubelet[2289]: E1101 01:01:30.080960 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.082528 kubelet[2289]: E1101 01:01:30.081967 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.082528 kubelet[2289]: W1101 01:01:30.081974 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.082528 kubelet[2289]: E1101 01:01:30.081981 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.082528 kubelet[2289]: E1101 01:01:30.082384 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.082528 kubelet[2289]: W1101 01:01:30.082392 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.082528 kubelet[2289]: E1101 01:01:30.082398 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.084597 kubelet[2289]: E1101 01:01:30.084481 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.084597 kubelet[2289]: W1101 01:01:30.084489 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.084597 kubelet[2289]: E1101 01:01:30.084497 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.085084 kubelet[2289]: E1101 01:01:30.084908 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.085084 kubelet[2289]: W1101 01:01:30.084913 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.085084 kubelet[2289]: E1101 01:01:30.084921 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:30.086534 kubelet[2289]: E1101 01:01:30.086499 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:30.086534 kubelet[2289]: W1101 01:01:30.086506 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:30.086534 kubelet[2289]: E1101 01:01:30.086518 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:30.677710 systemd[1]: run-containerd-runc-k8s.io-49e4806bcdb369df2763bf19a0e94874059073e99b09ded55965c605aa92cad8-runc.UJqqrG.mount: Deactivated successfully. Nov 1 01:01:31.096961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount607276887.mount: Deactivated successfully. 
Nov 1 01:01:31.841684 env[1357]: time="2025-11-01T01:01:31.841644132Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:31.842411 env[1357]: time="2025-11-01T01:01:31.842392087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:31.843215 env[1357]: time="2025-11-01T01:01:31.843200183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:31.843960 env[1357]: time="2025-11-01T01:01:31.843945343Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:31.844342 env[1357]: time="2025-11-01T01:01:31.844324495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:01:31.845039 env[1357]: time="2025-11-01T01:01:31.844963739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:01:31.857862 env[1357]: time="2025-11-01T01:01:31.857838728Z" level=info msg="CreateContainer within sandbox \"49e4806bcdb369df2763bf19a0e94874059073e99b09ded55965c605aa92cad8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:01:31.865070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169803707.mount: Deactivated successfully. 
Nov 1 01:01:31.867380 env[1357]: time="2025-11-01T01:01:31.867349273Z" level=info msg="CreateContainer within sandbox \"49e4806bcdb369df2763bf19a0e94874059073e99b09ded55965c605aa92cad8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7627c0f9e6eb6eb95279fd014b1943231910f8901ddbe8fd2050a298353ee675\"" Nov 1 01:01:31.867807 env[1357]: time="2025-11-01T01:01:31.867788134Z" level=info msg="StartContainer for \"7627c0f9e6eb6eb95279fd014b1943231910f8901ddbe8fd2050a298353ee675\"" Nov 1 01:01:31.932671 env[1357]: time="2025-11-01T01:01:31.932636942Z" level=info msg="StartContainer for \"7627c0f9e6eb6eb95279fd014b1943231910f8901ddbe8fd2050a298353ee675\" returns successfully" Nov 1 01:01:32.016905 kubelet[2289]: E1101 01:01:32.016868 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:32.142463 kubelet[2289]: E1101 01:01:32.142432 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:32.142463 kubelet[2289]: W1101 01:01:32.142457 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:32.142585 kubelet[2289]: E1101 01:01:32.142472 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:32.142710 kubelet[2289]: E1101 01:01:32.142699 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:32.142710 kubelet[2289]: W1101 01:01:32.142707 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:32.142762 kubelet[2289]: E1101 01:01:32.142712 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:32.142864 kubelet[2289]: E1101 01:01:32.142852 2289 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:32.142864 kubelet[2289]: W1101 01:01:32.142860 2289 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:32.142935 kubelet[2289]: E1101 01:01:32.142866 2289 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:33.087017 kubelet[2289]: I1101 01:01:33.086993 2289 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:01:33.106584 env[1357]: time="2025-11-01T01:01:33.106295456Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:33.107675 env[1357]: time="2025-11-01T01:01:33.107649806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:33.108974 env[1357]: time="2025-11-01T01:01:33.108958166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:33.109862 env[1357]: time="2025-11-01T01:01:33.109833496Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:33.110480 env[1357]: time="2025-11-01T01:01:33.110461446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:01:33.113091 env[1357]: time="2025-11-01T01:01:33.113058586Z" level=info msg="CreateContainer within sandbox \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:01:33.122323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60759723.mount: Deactivated successfully. 
Nov 1 01:01:33.126791 env[1357]: time="2025-11-01T01:01:33.126759252Z" level=info msg="CreateContainer within sandbox \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e81c2474606567185f12864ee4a7432b8c8129f55c3af80151d8274eeefbbea\"" Nov 1 01:01:33.128251 env[1357]: time="2025-11-01T01:01:33.128226712Z" level=info msg="StartContainer for \"6e81c2474606567185f12864ee4a7432b8c8129f55c3af80151d8274eeefbbea\"" 
Nov 1 01:01:33.185422 env[1357]: time="2025-11-01T01:01:33.185378551Z" level=info msg="StartContainer for \"6e81c2474606567185f12864ee4a7432b8c8129f55c3af80151d8274eeefbbea\" returns successfully" Nov 1 01:01:33.360484 env[1357]: time="2025-11-01T01:01:33.360403548Z" level=info msg="shim disconnected" id=6e81c2474606567185f12864ee4a7432b8c8129f55c3af80151d8274eeefbbea Nov 1 01:01:33.360641 env[1357]: time="2025-11-01T01:01:33.360627675Z" level=warning msg="cleaning up after shim disconnected" id=6e81c2474606567185f12864ee4a7432b8c8129f55c3af80151d8274eeefbbea namespace=k8s.io Nov 1 01:01:33.360709 env[1357]: time="2025-11-01T01:01:33.360699376Z" level=info msg="cleaning up dead shim" Nov 1 01:01:33.365231 env[1357]: time="2025-11-01T01:01:33.365210899Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:01:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2976 runtime=io.containerd.runc.v2\n" Nov 1 01:01:33.850102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e81c2474606567185f12864ee4a7432b8c8129f55c3af80151d8274eeefbbea-rootfs.mount: Deactivated successfully. 
Nov 1 01:01:34.016779 kubelet[2289]: E1101 01:01:34.016565 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:34.089369 env[1357]: time="2025-11-01T01:01:34.089345465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:01:34.099690 kubelet[2289]: I1101 01:01:34.099641 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68f99ddcb9-n4s98" podStartSLOduration=3.149157745 podStartE2EDuration="5.099628244s" podCreationTimestamp="2025-11-01 01:01:29 +0000 UTC" firstStartedPulling="2025-11-01 01:01:29.894422973 +0000 UTC m=+19.994028579" lastFinishedPulling="2025-11-01 01:01:31.844893471 +0000 UTC m=+21.944499078" observedRunningTime="2025-11-01 01:01:32.095920832 +0000 UTC m=+22.195526446" watchObservedRunningTime="2025-11-01 01:01:34.099628244 +0000 UTC m=+24.199233854" Nov 1 01:01:36.017728 kubelet[2289]: E1101 01:01:36.017491 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:37.185703 env[1357]: time="2025-11-01T01:01:37.185650385Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:37.187918 env[1357]: time="2025-11-01T01:01:37.187875281Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:37.193907 env[1357]: time="2025-11-01T01:01:37.193880407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:37.198492 env[1357]: time="2025-11-01T01:01:37.198461415Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:37.198826 env[1357]: time="2025-11-01T01:01:37.198809560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:01:37.212785 env[1357]: time="2025-11-01T01:01:37.212748506Z" level=info msg="CreateContainer within sandbox \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:01:37.239606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323753374.mount: Deactivated successfully. 
Nov 1 01:01:37.242549 env[1357]: time="2025-11-01T01:01:37.242515054Z" level=info msg="CreateContainer within sandbox \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6\"" Nov 1 01:01:37.243868 env[1357]: time="2025-11-01T01:01:37.243848513Z" level=info msg="StartContainer for \"6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6\"" Nov 1 01:01:37.312531 env[1357]: time="2025-11-01T01:01:37.312499556Z" level=info msg="StartContainer for \"6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6\" returns successfully" Nov 1 01:01:38.016873 kubelet[2289]: E1101 01:01:38.016840 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:38.236961 systemd[1]: run-containerd-runc-k8s.io-6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6-runc.FRE7Ib.mount: Deactivated successfully. Nov 1 01:01:39.451715 env[1357]: time="2025-11-01T01:01:39.451660651Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:01:39.464385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6-rootfs.mount: Deactivated successfully. 
Nov 1 01:01:39.487646 env[1357]: time="2025-11-01T01:01:39.487600767Z" level=info msg="shim disconnected" id=6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6 Nov 1 01:01:39.487646 env[1357]: time="2025-11-01T01:01:39.487638279Z" level=warning msg="cleaning up after shim disconnected" id=6b088fb7a73aee156070903104044e944fb93da5d20d9be615a0cb5764d06ca6 namespace=k8s.io Nov 1 01:01:39.487646 env[1357]: time="2025-11-01T01:01:39.487644975Z" level=info msg="cleaning up dead shim" Nov 1 01:01:39.496016 env[1357]: time="2025-11-01T01:01:39.492277768Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:01:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3041 runtime=io.containerd.runc.v2\n" Nov 1 01:01:39.513607 kubelet[2289]: I1101 01:01:39.502410 2289 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 01:01:40.085262 kubelet[2289]: I1101 01:01:40.085227 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cb274633-10f4-4984-be19-e536608b3bf1-goldmane-key-pair\") pod \"goldmane-666569f655-hsw5f\" (UID: \"cb274633-10f4-4984-be19-e536608b3bf1\") " pod="calico-system/goldmane-666569f655-hsw5f" Nov 1 01:01:40.085262 kubelet[2289]: I1101 01:01:40.085266 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb274633-10f4-4984-be19-e536608b3bf1-config\") pod \"goldmane-666569f655-hsw5f\" (UID: \"cb274633-10f4-4984-be19-e536608b3bf1\") " pod="calico-system/goldmane-666569f655-hsw5f" Nov 1 01:01:40.085461 kubelet[2289]: I1101 01:01:40.085285 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb274633-10f4-4984-be19-e536608b3bf1-goldmane-ca-bundle\") pod \"goldmane-666569f655-hsw5f\" (UID: 
\"cb274633-10f4-4984-be19-e536608b3bf1\") " pod="calico-system/goldmane-666569f655-hsw5f" Nov 1 01:01:40.085461 kubelet[2289]: I1101 01:01:40.085301 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9757878-cb1e-46ae-a174-a7b9152136f7-config-volume\") pod \"coredns-668d6bf9bc-rzp9k\" (UID: \"c9757878-cb1e-46ae-a174-a7b9152136f7\") " pod="kube-system/coredns-668d6bf9bc-rzp9k" Nov 1 01:01:40.085461 kubelet[2289]: I1101 01:01:40.085310 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnnfr\" (UniqueName: \"kubernetes.io/projected/c9757878-cb1e-46ae-a174-a7b9152136f7-kube-api-access-mnnfr\") pod \"coredns-668d6bf9bc-rzp9k\" (UID: \"c9757878-cb1e-46ae-a174-a7b9152136f7\") " pod="kube-system/coredns-668d6bf9bc-rzp9k" Nov 1 01:01:40.085461 kubelet[2289]: I1101 01:01:40.085325 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mx6f\" (UniqueName: \"kubernetes.io/projected/cb274633-10f4-4984-be19-e536608b3bf1-kube-api-access-9mx6f\") pod \"goldmane-666569f655-hsw5f\" (UID: \"cb274633-10f4-4984-be19-e536608b3bf1\") " pod="calico-system/goldmane-666569f655-hsw5f" Nov 1 01:01:40.188153 kubelet[2289]: I1101 01:01:40.188124 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4d7\" (UniqueName: \"kubernetes.io/projected/8387fb77-dd73-4b48-9e3f-a6209aeef170-kube-api-access-wt4d7\") pod \"coredns-668d6bf9bc-wzrxr\" (UID: \"8387fb77-dd73-4b48-9e3f-a6209aeef170\") " pod="kube-system/coredns-668d6bf9bc-wzrxr" Nov 1 01:01:40.188257 kubelet[2289]: I1101 01:01:40.188171 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8387fb77-dd73-4b48-9e3f-a6209aeef170-config-volume\") pod 
\"coredns-668d6bf9bc-wzrxr\" (UID: \"8387fb77-dd73-4b48-9e3f-a6209aeef170\") " pod="kube-system/coredns-668d6bf9bc-wzrxr" Nov 1 01:01:40.427105 kubelet[2289]: I1101 01:01:40.427058 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbnf\" (UniqueName: \"kubernetes.io/projected/9e4c5135-dd57-4916-a0b6-81789ca74a77-kube-api-access-pgbnf\") pod \"calico-apiserver-7787d665fb-fb8nb\" (UID: \"9e4c5135-dd57-4916-a0b6-81789ca74a77\") " pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" Nov 1 01:01:40.436486 kubelet[2289]: I1101 01:01:40.427114 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27248a54-b567-4865-8268-2eb8267aa120-calico-apiserver-certs\") pod \"calico-apiserver-7787d665fb-xxt5l\" (UID: \"27248a54-b567-4865-8268-2eb8267aa120\") " pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" Nov 1 01:01:40.436582 kubelet[2289]: I1101 01:01:40.436507 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnrd\" (UniqueName: \"kubernetes.io/projected/dbcec211-4cad-40a0-8aa5-e63111d93180-kube-api-access-wpnrd\") pod \"calico-apiserver-5c87d7ff56-95ljj\" (UID: \"dbcec211-4cad-40a0-8aa5-e63111d93180\") " pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" Nov 1 01:01:40.436582 kubelet[2289]: I1101 01:01:40.436522 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcdd\" (UniqueName: \"kubernetes.io/projected/50add9c4-b601-48d0-bb62-20ee6a0f4cad-kube-api-access-kkcdd\") pod \"whisker-9467669d4-jjpzl\" (UID: \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\") " pod="calico-system/whisker-9467669d4-jjpzl" Nov 1 01:01:40.436582 kubelet[2289]: I1101 01:01:40.436535 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a48af7bf-dcba-4afd-bada-a7a0787cc063-tigera-ca-bundle\") pod \"calico-kube-controllers-647db64f7d-p9wv7\" (UID: \"a48af7bf-dcba-4afd-bada-a7a0787cc063\") " pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" Nov 1 01:01:40.436582 kubelet[2289]: I1101 01:01:40.436544 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6njt\" (UniqueName: \"kubernetes.io/projected/27248a54-b567-4865-8268-2eb8267aa120-kube-api-access-v6njt\") pod \"calico-apiserver-7787d665fb-xxt5l\" (UID: \"27248a54-b567-4865-8268-2eb8267aa120\") " pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" Nov 1 01:01:40.436582 kubelet[2289]: I1101 01:01:40.436554 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-backend-key-pair\") pod \"whisker-9467669d4-jjpzl\" (UID: \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\") " pod="calico-system/whisker-9467669d4-jjpzl" Nov 1 01:01:40.436713 kubelet[2289]: I1101 01:01:40.436569 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-ca-bundle\") pod \"whisker-9467669d4-jjpzl\" (UID: \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\") " pod="calico-system/whisker-9467669d4-jjpzl" Nov 1 01:01:40.436713 kubelet[2289]: I1101 01:01:40.436582 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbcec211-4cad-40a0-8aa5-e63111d93180-calico-apiserver-certs\") pod \"calico-apiserver-5c87d7ff56-95ljj\" (UID: \"dbcec211-4cad-40a0-8aa5-e63111d93180\") " pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" Nov 1 01:01:40.436713 
kubelet[2289]: I1101 01:01:40.436593 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e4c5135-dd57-4916-a0b6-81789ca74a77-calico-apiserver-certs\") pod \"calico-apiserver-7787d665fb-fb8nb\" (UID: \"9e4c5135-dd57-4916-a0b6-81789ca74a77\") " pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" Nov 1 01:01:40.436713 kubelet[2289]: I1101 01:01:40.436610 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xqj\" (UniqueName: \"kubernetes.io/projected/a48af7bf-dcba-4afd-bada-a7a0787cc063-kube-api-access-67xqj\") pod \"calico-kube-controllers-647db64f7d-p9wv7\" (UID: \"a48af7bf-dcba-4afd-bada-a7a0787cc063\") " pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" Nov 1 01:01:40.456053 env[1357]: time="2025-11-01T01:01:40.455335682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzp9k,Uid:c9757878-cb1e-46ae-a174-a7b9152136f7,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:40.456436 env[1357]: time="2025-11-01T01:01:40.456423564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hsw5f,Uid:cb274633-10f4-4984-be19-e536608b3bf1,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:40.456822 env[1357]: time="2025-11-01T01:01:40.456805680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clw4d,Uid:2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:40.468370 env[1357]: time="2025-11-01T01:01:40.467502904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:01:40.730250 env[1357]: time="2025-11-01T01:01:40.729836640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9467669d4-jjpzl,Uid:50add9c4-b601-48d0-bb62-20ee6a0f4cad,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:40.730425 env[1357]: time="2025-11-01T01:01:40.730400693Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzrxr,Uid:8387fb77-dd73-4b48-9e3f-a6209aeef170,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:40.734190 env[1357]: time="2025-11-01T01:01:40.734156526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c87d7ff56-95ljj,Uid:dbcec211-4cad-40a0-8aa5-e63111d93180,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:01:40.734638 env[1357]: time="2025-11-01T01:01:40.734617312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-xxt5l,Uid:27248a54-b567-4865-8268-2eb8267aa120,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:01:40.736311 env[1357]: time="2025-11-01T01:01:40.736286460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647db64f7d-p9wv7,Uid:a48af7bf-dcba-4afd-bada-a7a0787cc063,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:40.737322 env[1357]: time="2025-11-01T01:01:40.737301246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-fb8nb,Uid:9e4c5135-dd57-4916-a0b6-81789ca74a77,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:01:40.987392 env[1357]: time="2025-11-01T01:01:40.987173547Z" level=error msg="Failed to destroy network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:40.987809 env[1357]: time="2025-11-01T01:01:40.987785952Z" level=error msg="encountered an error cleaning up failed sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:40.987893 env[1357]: 
time="2025-11-01T01:01:40.987874904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hsw5f,Uid:cb274633-10f4-4984-be19-e536608b3bf1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:40.990120 kubelet[2289]: E1101 01:01:40.990020 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:40.991734 kubelet[2289]: E1101 01:01:40.991645 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hsw5f" Nov 1 01:01:40.993158 kubelet[2289]: E1101 01:01:40.992924 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hsw5f" Nov 1 01:01:40.993158 kubelet[2289]: E1101 01:01:40.992981 2289 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hsw5f_calico-system(cb274633-10f4-4984-be19-e536608b3bf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hsw5f_calico-system(cb274633-10f4-4984-be19-e536608b3bf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:01:40.999516 env[1357]: time="2025-11-01T01:01:40.999480871Z" level=error msg="Failed to destroy network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:40.999849 env[1357]: time="2025-11-01T01:01:40.999832908Z" level=error msg="encountered an error cleaning up failed sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:40.999926 env[1357]: time="2025-11-01T01:01:40.999908685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clw4d,Uid:2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.000397 kubelet[2289]: E1101 01:01:41.000107 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.000397 kubelet[2289]: E1101 01:01:41.000154 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:41.000397 kubelet[2289]: E1101 01:01:41.000171 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clw4d" Nov 1 01:01:41.001528 kubelet[2289]: E1101 01:01:41.000210 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:41.015237 env[1357]: time="2025-11-01T01:01:41.015200806Z" level=error msg="Failed to destroy network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.015529 env[1357]: time="2025-11-01T01:01:41.015509085Z" level=error msg="Failed to destroy network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.015818 env[1357]: time="2025-11-01T01:01:41.015802236Z" level=error msg="encountered an error cleaning up failed sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.015903 env[1357]: time="2025-11-01T01:01:41.015886471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzp9k,Uid:c9757878-cb1e-46ae-a174-a7b9152136f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.016403 kubelet[2289]: E1101 01:01:41.016081 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.016403 kubelet[2289]: E1101 01:01:41.016149 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzp9k" Nov 1 01:01:41.016403 kubelet[2289]: E1101 01:01:41.016163 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rzp9k" Nov 1 01:01:41.016609 kubelet[2289]: E1101 01:01:41.016209 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rzp9k_kube-system(c9757878-cb1e-46ae-a174-a7b9152136f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rzp9k_kube-system(c9757878-cb1e-46ae-a174-a7b9152136f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rzp9k" podUID="c9757878-cb1e-46ae-a174-a7b9152136f7" Nov 1 01:01:41.016976 env[1357]: time="2025-11-01T01:01:41.016942793Z" level=error msg="encountered an error cleaning up failed sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.017101 env[1357]: time="2025-11-01T01:01:41.017069376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9467669d4-jjpzl,Uid:50add9c4-b601-48d0-bb62-20ee6a0f4cad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.017505 kubelet[2289]: E1101 01:01:41.017353 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.017505 kubelet[2289]: E1101 01:01:41.017396 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9467669d4-jjpzl" Nov 1 01:01:41.017505 kubelet[2289]: E1101 01:01:41.017413 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9467669d4-jjpzl" Nov 1 01:01:41.017651 kubelet[2289]: E1101 01:01:41.017454 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9467669d4-jjpzl_calico-system(50add9c4-b601-48d0-bb62-20ee6a0f4cad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9467669d4-jjpzl_calico-system(50add9c4-b601-48d0-bb62-20ee6a0f4cad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9467669d4-jjpzl" podUID="50add9c4-b601-48d0-bb62-20ee6a0f4cad" Nov 1 01:01:41.038192 env[1357]: time="2025-11-01T01:01:41.038151030Z" level=error msg="Failed to destroy network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.038690 env[1357]: 
time="2025-11-01T01:01:41.038648390Z" level=error msg="encountered an error cleaning up failed sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.038784 env[1357]: time="2025-11-01T01:01:41.038765321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647db64f7d-p9wv7,Uid:a48af7bf-dcba-4afd-bada-a7a0787cc063,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.038997 kubelet[2289]: E1101 01:01:41.038971 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.039049 kubelet[2289]: E1101 01:01:41.039018 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" Nov 1 01:01:41.039049 kubelet[2289]: E1101 01:01:41.039035 2289 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" Nov 1 01:01:41.039125 kubelet[2289]: E1101 01:01:41.039078 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-647db64f7d-p9wv7_calico-system(a48af7bf-dcba-4afd-bada-a7a0787cc063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-647db64f7d-p9wv7_calico-system(a48af7bf-dcba-4afd-bada-a7a0787cc063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:01:41.040094 env[1357]: time="2025-11-01T01:01:41.040066225Z" level=error msg="Failed to destroy network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.040870 env[1357]: time="2025-11-01T01:01:41.040400608Z" level=error msg="encountered an error cleaning up failed sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.040870 env[1357]: time="2025-11-01T01:01:41.040428705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzrxr,Uid:8387fb77-dd73-4b48-9e3f-a6209aeef170,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.040973 kubelet[2289]: E1101 01:01:41.040550 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.040973 kubelet[2289]: E1101 01:01:41.040602 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wzrxr" Nov 1 01:01:41.040973 kubelet[2289]: E1101 01:01:41.040637 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wzrxr" Nov 1 01:01:41.045364 kubelet[2289]: E1101 01:01:41.040689 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wzrxr_kube-system(8387fb77-dd73-4b48-9e3f-a6209aeef170)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wzrxr_kube-system(8387fb77-dd73-4b48-9e3f-a6209aeef170)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wzrxr" podUID="8387fb77-dd73-4b48-9e3f-a6209aeef170" Nov 1 01:01:41.045612 env[1357]: time="2025-11-01T01:01:41.045582774Z" level=error msg="Failed to destroy network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.045890 env[1357]: time="2025-11-01T01:01:41.045870223Z" level=error msg="encountered an error cleaning up failed sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.045978 env[1357]: time="2025-11-01T01:01:41.045960911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-fb8nb,Uid:9e4c5135-dd57-4916-a0b6-81789ca74a77,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.046715 kubelet[2289]: E1101 01:01:41.046588 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.046715 kubelet[2289]: E1101 01:01:41.046631 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" Nov 1 01:01:41.046715 kubelet[2289]: E1101 01:01:41.046644 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" Nov 1 01:01:41.047769 kubelet[2289]: E1101 01:01:41.046687 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7787d665fb-fb8nb_calico-apiserver(9e4c5135-dd57-4916-a0b6-81789ca74a77)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-7787d665fb-fb8nb_calico-apiserver(9e4c5135-dd57-4916-a0b6-81789ca74a77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:01:41.049611 env[1357]: time="2025-11-01T01:01:41.049569945Z" level=error msg="Failed to destroy network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.050233 env[1357]: time="2025-11-01T01:01:41.050043912Z" level=error msg="encountered an error cleaning up failed sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.050233 env[1357]: time="2025-11-01T01:01:41.050089587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-xxt5l,Uid:27248a54-b567-4865-8268-2eb8267aa120,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.050520 kubelet[2289]: E1101 01:01:41.050491 2289 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.050580 kubelet[2289]: E1101 01:01:41.050539 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" Nov 1 01:01:41.050580 kubelet[2289]: E1101 01:01:41.050553 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" Nov 1 01:01:41.050718 kubelet[2289]: E1101 01:01:41.050586 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7787d665fb-xxt5l_calico-apiserver(27248a54-b567-4865-8268-2eb8267aa120)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7787d665fb-xxt5l_calico-apiserver(27248a54-b567-4865-8268-2eb8267aa120)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:01:41.061061 env[1357]: time="2025-11-01T01:01:41.061020192Z" level=error msg="Failed to destroy network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.063026 env[1357]: time="2025-11-01T01:01:41.061263209Z" level=error msg="encountered an error cleaning up failed sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.063026 env[1357]: time="2025-11-01T01:01:41.061291302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c87d7ff56-95ljj,Uid:dbcec211-4cad-40a0-8aa5-e63111d93180,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.063154 kubelet[2289]: E1101 01:01:41.061459 2289 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 01:01:41.063154 kubelet[2289]: E1101 01:01:41.061496 2289 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" Nov 1 01:01:41.063154 kubelet[2289]: E1101 01:01:41.061517 2289 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" Nov 1 01:01:41.063281 kubelet[2289]: E1101 01:01:41.061556 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c87d7ff56-95ljj_calico-apiserver(dbcec211-4cad-40a0-8aa5-e63111d93180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c87d7ff56-95ljj_calico-apiserver(dbcec211-4cad-40a0-8aa5-e63111d93180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:01:41.461197 kubelet[2289]: I1101 01:01:41.461173 2289 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:01:41.465208 kubelet[2289]: I1101 01:01:41.464731 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:01:41.466473 kubelet[2289]: I1101 01:01:41.466155 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:01:41.467492 kubelet[2289]: I1101 01:01:41.467453 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:01:41.472567 kubelet[2289]: I1101 01:01:41.472279 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:01:41.473563 env[1357]: time="2025-11-01T01:01:41.473446861Z" level=info msg="StopPodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\"" Nov 1 01:01:41.474090 env[1357]: time="2025-11-01T01:01:41.474075633Z" level=info msg="StopPodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\"" Nov 1 01:01:41.474556 env[1357]: time="2025-11-01T01:01:41.474542012Z" level=info msg="StopPodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\"" Nov 1 01:01:41.474842 env[1357]: time="2025-11-01T01:01:41.474829688Z" level=info msg="StopPodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\"" Nov 1 01:01:41.475260 env[1357]: time="2025-11-01T01:01:41.475247457Z" level=info msg="StopPodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\"" Nov 1 01:01:41.475309 kubelet[2289]: I1101 01:01:41.475271 2289 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:01:41.475635 env[1357]: time="2025-11-01T01:01:41.475622447Z" level=info msg="StopPodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\"" Nov 1 01:01:41.476406 kubelet[2289]: I1101 01:01:41.476344 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:01:41.476813 env[1357]: time="2025-11-01T01:01:41.476790282Z" level=info msg="StopPodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\"" Nov 1 01:01:41.477980 kubelet[2289]: I1101 01:01:41.477949 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:01:41.478448 env[1357]: time="2025-11-01T01:01:41.478420774Z" level=info msg="StopPodSandbox for \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\"" Nov 1 01:01:41.479351 kubelet[2289]: I1101 01:01:41.479289 2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:01:41.479709 env[1357]: time="2025-11-01T01:01:41.479690651Z" level=info msg="StopPodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\"" Nov 1 01:01:41.509371 env[1357]: time="2025-11-01T01:01:41.509240081Z" level=error msg="StopPodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" failed" error="failed to destroy network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.511212 kubelet[2289]: E1101 01:01:41.511072 2289 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:01:41.511212 kubelet[2289]: E1101 01:01:41.511125 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7"} Nov 1 01:01:41.511212 kubelet[2289]: E1101 01:01:41.511168 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb274633-10f4-4984-be19-e536608b3bf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.511212 kubelet[2289]: E1101 01:01:41.511190 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb274633-10f4-4984-be19-e536608b3bf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:01:41.554269 env[1357]: time="2025-11-01T01:01:41.554236405Z" level=error msg="StopPodSandbox for 
\"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" failed" error="failed to destroy network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.554546 env[1357]: time="2025-11-01T01:01:41.554382868Z" level=error msg="StopPodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" failed" error="failed to destroy network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.555140 kubelet[2289]: E1101 01:01:41.555108 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:01:41.555195 kubelet[2289]: E1101 01:01:41.555144 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442"} Nov 1 01:01:41.555195 kubelet[2289]: E1101 01:01:41.555168 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9757878-cb1e-46ae-a174-a7b9152136f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.555195 kubelet[2289]: E1101 01:01:41.555183 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9757878-cb1e-46ae-a174-a7b9152136f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rzp9k" podUID="c9757878-cb1e-46ae-a174-a7b9152136f7" Nov 1 01:01:41.555195 kubelet[2289]: E1101 01:01:41.555108 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:01:41.555348 kubelet[2289]: E1101 01:01:41.555201 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5"} Nov 1 01:01:41.555348 kubelet[2289]: E1101 01:01:41.555213 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.555348 kubelet[2289]: E1101 01:01:41.555223 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:41.592526 env[1357]: time="2025-11-01T01:01:41.592481647Z" level=error msg="StopPodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" failed" error="failed to destroy network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.592838 env[1357]: time="2025-11-01T01:01:41.592820451Z" level=error msg="StopPodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" failed" error="failed to destroy network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.593124 kubelet[2289]: E1101 01:01:41.592992 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:01:41.593124 kubelet[2289]: E1101 01:01:41.592992 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:01:41.593124 kubelet[2289]: E1101 01:01:41.593025 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf"} Nov 1 01:01:41.593124 kubelet[2289]: E1101 01:01:41.593039 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b"} Nov 1 01:01:41.593124 kubelet[2289]: E1101 01:01:41.593048 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.594357 kubelet[2289]: E1101 01:01:41.593064 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"8387fb77-dd73-4b48-9e3f-a6209aeef170\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.594357 kubelet[2289]: E1101 01:01:41.593081 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8387fb77-dd73-4b48-9e3f-a6209aeef170\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wzrxr" podUID="8387fb77-dd73-4b48-9e3f-a6209aeef170" Nov 1 01:01:41.595000 kubelet[2289]: E1101 01:01:41.593061 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9467669d4-jjpzl" podUID="50add9c4-b601-48d0-bb62-20ee6a0f4cad" Nov 1 01:01:41.595175 env[1357]: time="2025-11-01T01:01:41.595147726Z" level=error msg="StopPodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" failed" error="failed to destroy network for sandbox 
\"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.595393 kubelet[2289]: E1101 01:01:41.595320 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:01:41.595393 kubelet[2289]: E1101 01:01:41.595345 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5"} Nov 1 01:01:41.595393 kubelet[2289]: E1101 01:01:41.595365 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e4c5135-dd57-4916-a0b6-81789ca74a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.595393 kubelet[2289]: E1101 01:01:41.595377 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e4c5135-dd57-4916-a0b6-81789ca74a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:01:41.596527 env[1357]: time="2025-11-01T01:01:41.596504882Z" level=error msg="StopPodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" failed" error="failed to destroy network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.596807 kubelet[2289]: E1101 01:01:41.596737 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:01:41.596807 kubelet[2289]: E1101 01:01:41.596757 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1"} Nov 1 01:01:41.596807 kubelet[2289]: E1101 01:01:41.596774 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a48af7bf-dcba-4afd-bada-a7a0787cc063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Nov 1 01:01:41.596807 kubelet[2289]: E1101 01:01:41.596788 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a48af7bf-dcba-4afd-bada-a7a0787cc063\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:01:41.597530 env[1357]: time="2025-11-01T01:01:41.597511724Z" level=error msg="StopPodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" failed" error="failed to destroy network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.597724 kubelet[2289]: E1101 01:01:41.597705 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:01:41.597761 kubelet[2289]: E1101 01:01:41.597728 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13"} Nov 1 01:01:41.597761 kubelet[2289]: E1101 01:01:41.597746 2289 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27248a54-b567-4865-8268-2eb8267aa120\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.597761 kubelet[2289]: E1101 01:01:41.597757 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27248a54-b567-4865-8268-2eb8267aa120\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:01:41.598337 env[1357]: time="2025-11-01T01:01:41.598317915Z" level=error msg="StopPodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" failed" error="failed to destroy network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:01:41.598497 kubelet[2289]: E1101 01:01:41.598441 2289 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:01:41.598497 kubelet[2289]: E1101 01:01:41.598460 2289 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54"} Nov 1 01:01:41.598497 kubelet[2289]: E1101 01:01:41.598474 2289 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbcec211-4cad-40a0-8aa5-e63111d93180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:01:41.598497 kubelet[2289]: E1101 01:01:41.598484 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbcec211-4cad-40a0-8aa5-e63111d93180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:01:44.439000 audit[3445]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3445 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:44.530458 kernel: audit: type=1325 audit(1761958904.439:300): table=filter:101 family=2 entries=21 op=nft_register_rule pid=3445 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:44.557112 
kernel: audit: type=1300 audit(1761958904.439:300): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8bc86df0 a2=0 a3=7ffc8bc86ddc items=0 ppid=2387 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:44.563579 kernel: audit: type=1327 audit(1761958904.439:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:44.563623 kernel: audit: type=1325 audit(1761958904.452:301): table=nat:102 family=2 entries=19 op=nft_register_chain pid=3445 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:44.570725 kernel: audit: type=1300 audit(1761958904.452:301): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc8bc86df0 a2=0 a3=7ffc8bc86ddc items=0 ppid=2387 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:44.580308 kernel: audit: type=1327 audit(1761958904.452:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:44.439000 audit[3445]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8bc86df0 a2=0 a3=7ffc8bc86ddc items=0 ppid=2387 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:44.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:44.452000 audit[3445]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3445 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Nov 1 01:01:44.452000 audit[3445]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc8bc86df0 a2=0 a3=7ffc8bc86ddc items=0 ppid=2387 pid=3445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:44.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:45.968762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689238090.mount: Deactivated successfully. Nov 1 01:01:46.007191 env[1357]: time="2025-11-01T01:01:46.007145934Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:46.009926 env[1357]: time="2025-11-01T01:01:46.009904876Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:46.010681 env[1357]: time="2025-11-01T01:01:46.010651101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:46.011422 env[1357]: time="2025-11-01T01:01:46.011406779Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 01:01:46.011803 env[1357]: time="2025-11-01T01:01:46.011785241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:01:46.049615 
env[1357]: time="2025-11-01T01:01:46.049585128Z" level=info msg="CreateContainer within sandbox \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:01:46.058877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26760169.mount: Deactivated successfully. Nov 1 01:01:46.061200 env[1357]: time="2025-11-01T01:01:46.061172236Z" level=info msg="CreateContainer within sandbox \"ddc77324a1897a4cc6ef631dd0601e11ddada97957e0de9e96a8b74deb0c9e87\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a452617c1eafd1ae76c1a80d68b621eb5658aea9e6f3c56486c9c71db5808985\"" Nov 1 01:01:46.062627 env[1357]: time="2025-11-01T01:01:46.062573604Z" level=info msg="StartContainer for \"a452617c1eafd1ae76c1a80d68b621eb5658aea9e6f3c56486c9c71db5808985\"" Nov 1 01:01:46.102534 env[1357]: time="2025-11-01T01:01:46.102490764Z" level=info msg="StartContainer for \"a452617c1eafd1ae76c1a80d68b621eb5658aea9e6f3c56486c9c71db5808985\" returns successfully" Nov 1 01:01:46.877248 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:01:46.877896 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 01:01:47.377511 kubelet[2289]: I1101 01:01:47.377420 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v446r" podStartSLOduration=2.4387253859999998 podStartE2EDuration="18.375094706s" podCreationTimestamp="2025-11-01 01:01:29 +0000 UTC" firstStartedPulling="2025-11-01 01:01:30.076150568 +0000 UTC m=+20.175756173" lastFinishedPulling="2025-11-01 01:01:46.012519887 +0000 UTC m=+36.112125493" observedRunningTime="2025-11-01 01:01:46.506416196 +0000 UTC m=+36.606021810" watchObservedRunningTime="2025-11-01 01:01:47.375094706 +0000 UTC m=+37.474700468" Nov 1 01:01:47.382487 env[1357]: time="2025-11-01T01:01:47.382456789Z" level=info msg="StopPodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\"" Nov 1 01:01:47.533344 systemd[1]: run-containerd-runc-k8s.io-a452617c1eafd1ae76c1a80d68b621eb5658aea9e6f3c56486c9c71db5808985-runc.fWu6lQ.mount: Deactivated successfully. Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.478 [INFO][3541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.479 [INFO][3541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" iface="eth0" netns="/var/run/netns/cni-8e825960-1cc0-824b-ecf2-5ef44016f921" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.480 [INFO][3541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" iface="eth0" netns="/var/run/netns/cni-8e825960-1cc0-824b-ecf2-5ef44016f921" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.482 [INFO][3541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" iface="eth0" netns="/var/run/netns/cni-8e825960-1cc0-824b-ecf2-5ef44016f921" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.482 [INFO][3541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.482 [INFO][3541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.745 [INFO][3551] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.749 [INFO][3551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.749 [INFO][3551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.762 [WARNING][3551] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.762 [INFO][3551] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.763 [INFO][3551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:47.765854 env[1357]: 2025-11-01 01:01:47.764 [INFO][3541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:01:47.768514 env[1357]: time="2025-11-01T01:01:47.768164189Z" level=info msg="TearDown network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" successfully" Nov 1 01:01:47.768514 env[1357]: time="2025-11-01T01:01:47.768190392Z" level=info msg="StopPodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" returns successfully" Nov 1 01:01:47.767620 systemd[1]: run-netns-cni\x2d8e825960\x2d1cc0\x2d824b\x2decf2\x2d5ef44016f921.mount: Deactivated successfully. 
Nov 1 01:01:47.892924 kubelet[2289]: I1101 01:01:47.892874 2289 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkcdd\" (UniqueName: \"kubernetes.io/projected/50add9c4-b601-48d0-bb62-20ee6a0f4cad-kube-api-access-kkcdd\") pod \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\" (UID: \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\") " Nov 1 01:01:47.893255 kubelet[2289]: I1101 01:01:47.893044 2289 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-ca-bundle\") pod \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\" (UID: \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\") " Nov 1 01:01:47.893255 kubelet[2289]: I1101 01:01:47.893068 2289 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-backend-key-pair\") pod \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\" (UID: \"50add9c4-b601-48d0-bb62-20ee6a0f4cad\") " Nov 1 01:01:47.898529 kubelet[2289]: I1101 01:01:47.897598 2289 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "50add9c4-b601-48d0-bb62-20ee6a0f4cad" (UID: "50add9c4-b601-48d0-bb62-20ee6a0f4cad"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:01:47.898694 kubelet[2289]: I1101 01:01:47.898600 2289 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "50add9c4-b601-48d0-bb62-20ee6a0f4cad" (UID: "50add9c4-b601-48d0-bb62-20ee6a0f4cad"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:01:47.899312 kubelet[2289]: I1101 01:01:47.897181 2289 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50add9c4-b601-48d0-bb62-20ee6a0f4cad-kube-api-access-kkcdd" (OuterVolumeSpecName: "kube-api-access-kkcdd") pod "50add9c4-b601-48d0-bb62-20ee6a0f4cad" (UID: "50add9c4-b601-48d0-bb62-20ee6a0f4cad"). InnerVolumeSpecName "kube-api-access-kkcdd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:01:47.968101 systemd[1]: var-lib-kubelet-pods-50add9c4\x2db601\x2d48d0\x2dbb62\x2d20ee6a0f4cad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkcdd.mount: Deactivated successfully. Nov 1 01:01:47.968196 systemd[1]: var-lib-kubelet-pods-50add9c4\x2db601\x2d48d0\x2dbb62\x2d20ee6a0f4cad-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 01:01:47.994009 kubelet[2289]: I1101 01:01:47.993986 2289 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 01:01:47.994009 kubelet[2289]: I1101 01:01:47.994006 2289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kkcdd\" (UniqueName: \"kubernetes.io/projected/50add9c4-b601-48d0-bb62-20ee6a0f4cad-kube-api-access-kkcdd\") on node \"localhost\" DevicePath \"\"" Nov 1 01:01:47.994009 kubelet[2289]: I1101 01:01:47.994013 2289 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50add9c4-b601-48d0-bb62-20ee6a0f4cad-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 01:01:48.645000 audit[3648]: AVC avc: denied { write } for pid=3648 comm="tee" name="fd" dev="proc" ino=37608 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 
01:01:48.667586 kernel: audit: type=1400 audit(1761958908.645:302): avc: denied { write } for pid=3648 comm="tee" name="fd" dev="proc" ino=37608 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.667676 kernel: audit: type=1400 audit(1761958908.649:303): avc: denied { write } for pid=3652 comm="tee" name="fd" dev="proc" ino=37613 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.667701 kernel: audit: type=1400 audit(1761958908.653:304): avc: denied { write } for pid=3650 comm="tee" name="fd" dev="proc" ino=37616 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.667720 kernel: audit: type=1400 audit(1761958908.656:305): avc: denied { write } for pid=3656 comm="tee" name="fd" dev="proc" ino=37619 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.649000 audit[3652]: AVC avc: denied { write } for pid=3652 comm="tee" name="fd" dev="proc" ino=37613 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.653000 audit[3650]: AVC avc: denied { write } for pid=3650 comm="tee" name="fd" dev="proc" ino=37616 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.656000 audit[3656]: AVC avc: denied { write } for pid=3656 comm="tee" name="fd" dev="proc" ino=37619 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.663000 audit[3646]: AVC avc: denied { write } for pid=3646 comm="tee" name="fd" dev="proc" ino=37622 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.663000 audit[3654]: AVC avc: denied { write } for pid=3654 comm="tee" name="fd" dev="proc" ino=37625 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.664000 audit[3658]: AVC avc: denied { write } for pid=3658 comm="tee" name="fd" dev="proc" ino=37628 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 01:01:48.656000 audit[3656]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed27b97d1 a2=241 a3=1b6 items=1 ppid=3590 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.656000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 01:01:48.656000 audit: PATH item=0 name="/dev/fd/63" inode=37605 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.663000 audit[3654]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd53a937e1 a2=241 a3=1b6 items=1 ppid=3593 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.663000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 01:01:48.663000 audit: PATH item=0 name="/dev/fd/63" inode=37604 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.645000 
audit[3648]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd6d2787e1 a2=241 a3=1b6 items=1 ppid=3594 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.645000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 01:01:48.645000 audit: PATH item=0 name="/dev/fd/63" inode=37601 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.649000 audit[3652]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcbbc07e3 a2=241 a3=1b6 items=1 ppid=3591 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.649000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 01:01:48.649000 audit: PATH item=0 name="/dev/fd/63" inode=37603 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.653000 audit[3650]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc5b4ce7d2 a2=241 a3=1b6 items=1 ppid=3592 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.653000 audit: CWD 
cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 01:01:48.653000 audit: PATH item=0 name="/dev/fd/63" inode=37602 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.653000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.663000 audit[3646]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf74047e1 a2=241 a3=1b6 items=1 ppid=3595 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.663000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 01:01:48.663000 audit: PATH item=0 name="/dev/fd/63" inode=37600 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.664000 audit[3658]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd08cd17e2 a2=241 a3=1b6 items=1 ppid=3589 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.664000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 01:01:48.664000 audit: PATH item=0 name="/dev/fd/63" inode=36382 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 01:01:48.664000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.846000 audit: BPF prog-id=10 op=LOAD Nov 1 01:01:48.846000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef554c4e0 a2=98 a3=1fffffffffffffff items=0 ppid=3612 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.846000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:01:48.846000 audit: BPF prog-id=10 op=UNLOAD Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for 
pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit: BPF prog-id=11 op=LOAD Nov 1 01:01:48.847000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef554c3c0 a2=94 a3=3 items=0 ppid=3612 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.847000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:01:48.847000 audit: BPF prog-id=11 op=UNLOAD Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { 
perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit: BPF prog-id=12 op=LOAD Nov 1 01:01:48.847000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef554c400 a2=94 a3=7ffef554c5e0 items=0 ppid=3612 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.847000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:01:48.847000 audit: BPF prog-id=12 op=UNLOAD Nov 1 01:01:48.847000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.847000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffef554c4d0 a2=50 a3=a000000085 items=0 ppid=3612 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.847000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.854000 audit: BPF prog-id=13 op=LOAD Nov 1 01:01:48.854000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8eada470 a2=98 a3=3 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.854000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.854000 audit: BPF prog-id=13 op=UNLOAD Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit: BPF prog-id=14 op=LOAD Nov 1 01:01:48.856000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8eada260 a2=94 a3=54428f items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.856000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.856000 audit: BPF prog-id=14 op=UNLOAD Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:01:48.856000 audit: BPF prog-id=15 op=LOAD Nov 1 01:01:48.856000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8eada290 a2=94 a3=2 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.856000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.856000 audit: BPF prog-id=15 op=UNLOAD Nov 1 01:01:48.910731 kubelet[2289]: I1101 01:01:48.910646 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jxtv\" (UniqueName: \"kubernetes.io/projected/e47c436e-7585-46a1-976f-d1673b769a3e-kube-api-access-9jxtv\") pod \"whisker-d4bf4bbfd-xnxgp\" (UID: \"e47c436e-7585-46a1-976f-d1673b769a3e\") " pod="calico-system/whisker-d4bf4bbfd-xnxgp" Nov 1 01:01:48.910731 kubelet[2289]: I1101 01:01:48.910693 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e47c436e-7585-46a1-976f-d1673b769a3e-whisker-ca-bundle\") pod \"whisker-d4bf4bbfd-xnxgp\" (UID: \"e47c436e-7585-46a1-976f-d1673b769a3e\") " pod="calico-system/whisker-d4bf4bbfd-xnxgp" Nov 1 01:01:48.910731 kubelet[2289]: I1101 01:01:48.910713 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e47c436e-7585-46a1-976f-d1673b769a3e-whisker-backend-key-pair\") pod \"whisker-d4bf4bbfd-xnxgp\" (UID: \"e47c436e-7585-46a1-976f-d1673b769a3e\") " pod="calico-system/whisker-d4bf4bbfd-xnxgp" Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.939000 audit: BPF prog-id=16 op=LOAD Nov 1 01:01:48.939000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe8eada150 a2=94 a3=1 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.939000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.940000 audit: BPF prog-id=16 op=UNLOAD Nov 1 01:01:48.940000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.940000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe8eada220 a2=50 a3=7ffe8eada300 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.940000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8eada160 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8eada190 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8eada0a0 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8eada1b0 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8eada190 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8eada180 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8eada1b0 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8eada190 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8eada1b0 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.946000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.946000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe8eada180 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe8eada1f0 a2=28 a3=0 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 
audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe8ead9fa0 a2=50 a3=1 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit: BPF prog-id=17 op=LOAD Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe8ead9fa0 a2=94 a3=5 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 audit: BPF prog-id=17 op=UNLOAD Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe8eada050 a2=50 a3=1 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 
success=yes exit=0 a0=16 a1=7ffe8eada170 a2=4 a3=38 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { confidentiality } for pid=3679 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe8eada1c0 a2=94 a3=6 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { confidentiality } for pid=3679 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe8ead9970 a2=94 a3=88 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { bpf } for pid=3679 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: AVC avc: denied { perfmon } for pid=3679 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.947000 audit[3679]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe8ead9970 a2=94 a3=88 items=0 ppid=3612 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit: BPF prog-id=18 op=LOAD Nov 1 01:01:48.954000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa3773c80 a2=98 a3=1999999999999999 items=0 ppid=3612 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.954000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:01:48.954000 audit: BPF prog-id=18 op=UNLOAD Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit: BPF prog-id=19 op=LOAD Nov 1 01:01:48.954000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa3773b60 a2=94 a3=ffff items=0 ppid=3612 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.954000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:01:48.954000 audit: BPF prog-id=19 op=UNLOAD Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { perfmon } for pid=3700 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit[3700]: AVC avc: denied { bpf } for pid=3700 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:48.954000 audit: BPF prog-id=20 op=LOAD Nov 1 01:01:48.954000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa3773ba0 a2=94 a3=7fffa3773d80 items=0 ppid=3612 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:48.954000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 01:01:48.954000 audit: BPF prog-id=20 op=UNLOAD Nov 1 01:01:49.012084 systemd-networkd[1114]: vxlan.calico: Link UP Nov 1 01:01:49.012091 systemd-networkd[1114]: vxlan.calico: Gained carrier Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC 
avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.047000 audit: BPF prog-id=21 op=LOAD Nov 1 01:01:49.047000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbfbbdef0 a2=98 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.047000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.047000 audit: BPF prog-id=21 op=UNLOAD Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for 
pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit: BPF prog-id=22 op=LOAD Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbfbbdd00 a2=94 a3=54428f items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit: BPF prog-id=22 op=UNLOAD Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit: BPF prog-id=23 op=LOAD Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbfbbdd30 a2=94 a3=2 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit: BPF prog-id=23 op=UNLOAD Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdbfbbdc00 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdbfbbdc30 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdbfbbdb40 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdbfbbdc50 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdbfbbdc30 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdbfbbdc20 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdbfbbdc50 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdbfbbdc30 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdbfbbdc50 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdbfbbdc20 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdbfbbdc90 a2=28 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit: BPF prog-id=24 op=LOAD Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbfbbdb00 a2=94 a3=0 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit: BPF prog-id=24 op=UNLOAD Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: 
SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffdbfbbdaf0 a2=50 a3=2800 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffdbfbbdaf0 a2=50 a3=2800 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit: BPF prog-id=25 op=LOAD Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbfbbd310 a2=94 a3=2 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.049000 audit: BPF prog-id=25 op=UNLOAD Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.049000 audit: BPF prog-id=26 op=LOAD Nov 1 01:01:49.049000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbfbbd410 a2=94 a3=30 items=0 ppid=3612 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.049000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.052000 audit: BPF prog-id=27 op=LOAD Nov 1 01:01:49.052000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec8920030 a2=98 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.052000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.052000 audit: BPF prog-id=27 op=UNLOAD Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit: BPF prog-id=28 op=LOAD Nov 1 01:01:49.053000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec891fe20 a2=94 a3=54428f items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.053000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.053000 audit: BPF prog-id=28 op=UNLOAD Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit[3731]: AVC avc: 
denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.053000 audit: BPF prog-id=29 op=LOAD Nov 1 01:01:49.053000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec891fe50 a2=94 a3=2 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.053000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.055000 audit: BPF prog-id=29 op=UNLOAD Nov 1 01:01:49.138356 env[1357]: time="2025-11-01T01:01:49.138323008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4bf4bbfd-xnxgp,Uid:e47c436e-7585-46a1-976f-d1673b769a3e,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit: BPF prog-id=30 op=LOAD Nov 1 01:01:49.146000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffec891fd10 a2=94 a3=1 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.146000 audit: BPF prog-id=30 op=UNLOAD Nov 1 01:01:49.146000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.146000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffec891fde0 a2=50 a3=7ffec891fec0 
items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffec891fd20 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec891fd50 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 
audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec891fc60 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffec891fd70 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffec891fd50 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffec891fd40 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffec891fd70 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec891fd50 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec891fd70 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec891fd40 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffec891fdb0 a2=28 a3=0 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffec891fb60 a2=50 a3=1 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } 
for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit: BPF prog-id=31 op=LOAD Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec891fb60 a2=94 a3=5 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit: BPF prog-id=31 op=UNLOAD Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffec891fc10 a2=50 a3=1 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffec891fd30 a2=4 a3=38 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { confidentiality } for pid=3731 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffec891fd80 a2=94 a3=6 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { 
perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { confidentiality } for pid=3731 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffec891f530 a2=94 a3=88 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { perfmon } for pid=3731 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.160000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
01:01:49.160000 audit[3731]: AVC avc: denied { confidentiality } for pid=3731 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 01:01:49.160000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffec891f530 a2=94 a3=88 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.161000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.161000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffec8920f60 a2=10 a3=f8f00800 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.161000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.161000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.161000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffec8920e00 a2=10 a3=3 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.161000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.161000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.161000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffec8920da0 a2=10 a3=3 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.161000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.161000 audit[3731]: AVC avc: denied { bpf } for pid=3731 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 01:01:49.161000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffec8920da0 a2=10 a3=7 items=0 ppid=3612 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.161000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 01:01:49.167000 audit: BPF prog-id=26 op=UNLOAD Nov 1 01:01:49.255000 audit[3783]: NETFILTER_CFG 
table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:49.255000 audit[3783]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc32105fd0 a2=0 a3=7ffc32105fbc items=0 ppid=3612 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.255000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:49.261000 audit[3781]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3781 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:49.261000 audit[3781]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff282e0cc0 a2=0 a3=7fff282e0cac items=0 ppid=3612 pid=3781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.261000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:49.262000 audit[3784]: NETFILTER_CFG table=filter:105 family=2 entries=39 op=nft_register_chain pid=3784 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:49.262000 audit[3784]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffe783981d0 a2=0 a3=7ffe783981bc items=0 ppid=3612 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.262000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:49.266000 audit[3782]: NETFILTER_CFG table=raw:106 family=2 entries=21 op=nft_register_chain pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:49.266000 audit[3782]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffde246e710 a2=0 a3=7ffde246e6fc items=0 ppid=3612 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.266000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:49.304403 systemd-networkd[1114]: calic5c9c2ed576: Link UP Nov 1 01:01:49.305814 systemd-networkd[1114]: calic5c9c2ed576: Gained carrier Nov 1 01:01:49.306760 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic5c9c2ed576: link becomes ready Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.217 [INFO][3743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0 whisker-d4bf4bbfd- calico-system e47c436e-7585-46a1-976f-d1673b769a3e 901 0 2025-11-01 01:01:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d4bf4bbfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-d4bf4bbfd-xnxgp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic5c9c2ed576 [] [] }} ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-" Nov 1 
01:01:49.321732 env[1357]: 2025-11-01 01:01:49.217 [INFO][3743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.259 [INFO][3769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" HandleID="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Workload="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.260 [INFO][3769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" HandleID="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Workload="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-d4bf4bbfd-xnxgp", "timestamp":"2025-11-01 01:01:49.259625515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.260 [INFO][3769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.260 [INFO][3769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.260 [INFO][3769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.274 [INFO][3769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.285 [INFO][3769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.287 [INFO][3769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.288 [INFO][3769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.290 [INFO][3769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.290 [INFO][3769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.290 [INFO][3769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13 Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.295 [INFO][3769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.298 [INFO][3769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" host="localhost" Nov 1 01:01:49.321732 
env[1357]: 2025-11-01 01:01:49.298 [INFO][3769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" host="localhost" Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.298 [INFO][3769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:49.321732 env[1357]: 2025-11-01 01:01:49.298 [INFO][3769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" HandleID="k8s-pod-network.40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Workload="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.322216 env[1357]: 2025-11-01 01:01:49.301 [INFO][3743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0", GenerateName:"whisker-d4bf4bbfd-", Namespace:"calico-system", SelfLink:"", UID:"e47c436e-7585-46a1-976f-d1673b769a3e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d4bf4bbfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"whisker-d4bf4bbfd-xnxgp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5c9c2ed576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:49.322216 env[1357]: 2025-11-01 01:01:49.301 [INFO][3743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.322216 env[1357]: 2025-11-01 01:01:49.301 [INFO][3743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5c9c2ed576 ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.322216 env[1357]: 2025-11-01 01:01:49.307 [INFO][3743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.322216 env[1357]: 2025-11-01 01:01:49.310 [INFO][3743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0", 
GenerateName:"whisker-d4bf4bbfd-", Namespace:"calico-system", SelfLink:"", UID:"e47c436e-7585-46a1-976f-d1673b769a3e", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d4bf4bbfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13", Pod:"whisker-d4bf4bbfd-xnxgp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5c9c2ed576", MAC:"06:64:64:db:d2:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:49.322216 env[1357]: 2025-11-01 01:01:49.319 [INFO][3743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13" Namespace="calico-system" Pod="whisker-d4bf4bbfd-xnxgp" WorkloadEndpoint="localhost-k8s-whisker--d4bf4bbfd--xnxgp-eth0" Nov 1 01:01:49.330253 env[1357]: time="2025-11-01T01:01:49.330198122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:49.330384 env[1357]: time="2025-11-01T01:01:49.330369625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:49.330462 env[1357]: time="2025-11-01T01:01:49.330449082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:49.330612 env[1357]: time="2025-11-01T01:01:49.330598623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13 pid=3807 runtime=io.containerd.runc.v2 Nov 1 01:01:49.359763 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:49.382107 env[1357]: time="2025-11-01T01:01:49.382054746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4bf4bbfd-xnxgp,Uid:e47c436e-7585-46a1-976f-d1673b769a3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"40a43b7b83f28da8b23ff21ac441b1d726ef24222275fdc1940929746e7eeb13\"" Nov 1 01:01:49.339000 audit[3821]: NETFILTER_CFG table=filter:107 family=2 entries=59 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:49.450162 kernel: kauditd_printk_skb: 559 callbacks suppressed Nov 1 01:01:49.450199 kernel: audit: type=1325 audit(1761958909.339:411): table=filter:107 family=2 entries=59 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:49.477326 kernel: audit: type=1300 audit(1761958909.339:411): arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffede10c1c0 a2=0 a3=7ffede10c1ac items=0 ppid=3612 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.477381 kernel: audit: type=1327 audit(1761958909.339:411): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:49.339000 audit[3821]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffede10c1c0 a2=0 a3=7ffede10c1ac items=0 ppid=3612 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:49.339000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:49.477482 env[1357]: time="2025-11-01T01:01:49.451191478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:01:49.798096 env[1357]: time="2025-11-01T01:01:49.797769657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:49.798330 env[1357]: time="2025-11-01T01:01:49.798248187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:01:49.799746 kubelet[2289]: E1101 01:01:49.799719 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:01:49.800728 kubelet[2289]: E1101 01:01:49.799863 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:01:49.811922 kubelet[2289]: E1101 01:01:49.811873 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24a2a9b1916e445d94a05dba571afa1c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:49.815032 env[1357]: time="2025-11-01T01:01:49.814187965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:01:50.024312 kubelet[2289]: I1101 01:01:50.024275 2289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50add9c4-b601-48d0-bb62-20ee6a0f4cad" path="/var/lib/kubelet/pods/50add9c4-b601-48d0-bb62-20ee6a0f4cad/volumes" Nov 1 01:01:50.117601 env[1357]: time="2025-11-01T01:01:50.117155099Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:50.127056 env[1357]: time="2025-11-01T01:01:50.126988549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:01:50.127143 kubelet[2289]: E1101 01:01:50.127119 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:01:50.127187 kubelet[2289]: E1101 01:01:50.127151 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:01:50.133184 kubelet[2289]: E1101 01:01:50.127218 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:50.133184 kubelet[2289]: E1101 01:01:50.128448 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:01:50.524649 kubelet[2289]: E1101 01:01:50.524587 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:01:50.556817 systemd-networkd[1114]: calic5c9c2ed576: Gained IPv6LL Nov 1 01:01:50.557111 systemd-networkd[1114]: vxlan.calico: Gained IPv6LL Nov 1 01:01:50.614000 audit[3850]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:50.614000 audit[3850]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff2687d900 a2=0 a3=7fff2687d8ec items=0 ppid=2387 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:50.622076 kernel: audit: type=1325 audit(1761958910.614:412): table=filter:108 family=2 entries=20 op=nft_register_rule pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:50.622211 kernel: audit: type=1300 audit(1761958910.614:412): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff2687d900 a2=0 a3=7fff2687d8ec items=0 ppid=2387 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:50.622243 kernel: audit: type=1327 audit(1761958910.614:412): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:50.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:50.623000 audit[3850]: NETFILTER_CFG table=nat:109 family=2 entries=14 
op=nft_register_rule pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:50.623000 audit[3850]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff2687d900 a2=0 a3=0 items=0 ppid=2387 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:50.631155 kernel: audit: type=1325 audit(1761958910.623:413): table=nat:109 family=2 entries=14 op=nft_register_rule pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:50.631299 kernel: audit: type=1300 audit(1761958910.623:413): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff2687d900 a2=0 a3=0 items=0 ppid=2387 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:50.631328 kernel: audit: type=1327 audit(1761958910.623:413): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:50.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:52.017437 env[1357]: time="2025-11-01T01:01:52.017406146Z" level=info msg="StopPodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\"" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.057 [INFO][3863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.057 [INFO][3863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" iface="eth0" netns="/var/run/netns/cni-a26f7abd-b550-6e7d-28d0-470be9458b8d" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.057 [INFO][3863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" iface="eth0" netns="/var/run/netns/cni-a26f7abd-b550-6e7d-28d0-470be9458b8d" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.057 [INFO][3863] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" iface="eth0" netns="/var/run/netns/cni-a26f7abd-b550-6e7d-28d0-470be9458b8d" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.057 [INFO][3863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.057 [INFO][3863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.088 [INFO][3870] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.088 [INFO][3870] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.088 [INFO][3870] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.092 [WARNING][3870] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.092 [INFO][3870] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.103 [INFO][3870] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:52.105543 env[1357]: 2025-11-01 01:01:52.104 [INFO][3863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:01:52.108752 systemd[1]: run-netns-cni\x2da26f7abd\x2db550\x2d6e7d\x2d28d0\x2d470be9458b8d.mount: Deactivated successfully. 
Nov 1 01:01:52.109393 env[1357]: time="2025-11-01T01:01:52.109363361Z" level=info msg="TearDown network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" successfully" Nov 1 01:01:52.109393 env[1357]: time="2025-11-01T01:01:52.109391155Z" level=info msg="StopPodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" returns successfully" Nov 1 01:01:52.109902 env[1357]: time="2025-11-01T01:01:52.109886382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzp9k,Uid:c9757878-cb1e-46ae-a174-a7b9152136f7,Namespace:kube-system,Attempt:1,}" Nov 1 01:01:52.202197 systemd-networkd[1114]: cali40be43ccdf6: Link UP Nov 1 01:01:52.203704 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:01:52.205971 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali40be43ccdf6: link becomes ready Nov 1 01:01:52.205748 systemd-networkd[1114]: cali40be43ccdf6: Gained carrier Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.148 [INFO][3876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0 coredns-668d6bf9bc- kube-system c9757878-cb1e-46ae-a174-a7b9152136f7 923 0 2025-11-01 01:01:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rzp9k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40be43ccdf6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.148 [INFO][3876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.164 [INFO][3889] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" HandleID="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.164 [INFO][3889] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" HandleID="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rzp9k", "timestamp":"2025-11-01 01:01:52.164170615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.164 [INFO][3889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.164 [INFO][3889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.164 [INFO][3889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.171 [INFO][3889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.173 [INFO][3889] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.177 [INFO][3889] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.178 [INFO][3889] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.179 [INFO][3889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.179 [INFO][3889] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.180 [INFO][3889] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40 Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.193 [INFO][3889] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.198 [INFO][3889] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" host="localhost" Nov 1 01:01:52.236695 
env[1357]: 2025-11-01 01:01:52.198 [INFO][3889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" host="localhost" Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.199 [INFO][3889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:52.236695 env[1357]: 2025-11-01 01:01:52.199 [INFO][3889] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" HandleID="k8s-pod-network.70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.237251 env[1357]: 2025-11-01 01:01:52.200 [INFO][3876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c9757878-cb1e-46ae-a174-a7b9152136f7", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-668d6bf9bc-rzp9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40be43ccdf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:52.237251 env[1357]: 2025-11-01 01:01:52.200 [INFO][3876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.237251 env[1357]: 2025-11-01 01:01:52.200 [INFO][3876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40be43ccdf6 ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.237251 env[1357]: 2025-11-01 01:01:52.206 [INFO][3876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.237251 env[1357]: 2025-11-01 01:01:52.206 [INFO][3876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c9757878-cb1e-46ae-a174-a7b9152136f7", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40", Pod:"coredns-668d6bf9bc-rzp9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40be43ccdf6", MAC:"76:0c:d7:3b:b2:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:52.237251 env[1357]: 2025-11-01 01:01:52.230 [INFO][3876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40" Namespace="kube-system" Pod="coredns-668d6bf9bc-rzp9k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:01:52.245281 env[1357]: time="2025-11-01T01:01:52.241422851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:52.245281 env[1357]: time="2025-11-01T01:01:52.241470624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:52.245281 env[1357]: time="2025-11-01T01:01:52.241487786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:52.245281 env[1357]: time="2025-11-01T01:01:52.241571523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40 pid=3908 runtime=io.containerd.runc.v2 Nov 1 01:01:52.274864 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:52.293000 audit[3937]: NETFILTER_CFG table=filter:110 family=2 entries=42 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:52.293000 audit[3937]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd76067fa0 a2=0 a3=7ffd76067f8c items=0 ppid=3612 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:52.297730 
kernel: audit: type=1325 audit(1761958912.293:414): table=filter:110 family=2 entries=42 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:52.293000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:52.308464 env[1357]: time="2025-11-01T01:01:52.308438373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rzp9k,Uid:c9757878-cb1e-46ae-a174-a7b9152136f7,Namespace:kube-system,Attempt:1,} returns sandbox id \"70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40\"" Nov 1 01:01:52.326553 env[1357]: time="2025-11-01T01:01:52.326529391Z" level=info msg="CreateContainer within sandbox \"70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:01:52.346043 env[1357]: time="2025-11-01T01:01:52.346003160Z" level=info msg="CreateContainer within sandbox \"70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d625bff3974fbdd672776054a553cf105c166eac36ec27bbeffff0441b5721b3\"" Nov 1 01:01:52.347642 env[1357]: time="2025-11-01T01:01:52.347610514Z" level=info msg="StartContainer for \"d625bff3974fbdd672776054a553cf105c166eac36ec27bbeffff0441b5721b3\"" Nov 1 01:01:52.418763 env[1357]: time="2025-11-01T01:01:52.418736182Z" level=info msg="StartContainer for \"d625bff3974fbdd672776054a553cf105c166eac36ec27bbeffff0441b5721b3\" returns successfully" Nov 1 01:01:52.551000 audit[3976]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:52.551000 audit[3976]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd7b2cba60 a2=0 a3=7ffd7b2cba4c items=0 ppid=2387 pid=3976 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:52.551000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:52.556000 audit[3976]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:52.556000 audit[3976]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd7b2cba60 a2=0 a3=0 items=0 ppid=2387 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:52.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:53.022170 env[1357]: time="2025-11-01T01:01:53.022128185Z" level=info msg="StopPodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\"" Nov 1 01:01:53.025792 env[1357]: time="2025-11-01T01:01:53.025765394Z" level=info msg="StopPodSandbox for \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\"" Nov 1 01:01:53.095102 kubelet[2289]: I1101 01:01:53.095020 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rzp9k" podStartSLOduration=38.08408024 podStartE2EDuration="38.08408024s" podCreationTimestamp="2025-11-01 01:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:52.537907428 +0000 UTC m=+42.637513043" watchObservedRunningTime="2025-11-01 01:01:53.08408024 +0000 UTC m=+43.183685851" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.092 [INFO][4003] cni-plugin/k8s.go 
640: Cleaning up netns ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.092 [INFO][4003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" iface="eth0" netns="/var/run/netns/cni-e88606cf-d123-1eb3-66ae-2d80a495e780" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.092 [INFO][4003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" iface="eth0" netns="/var/run/netns/cni-e88606cf-d123-1eb3-66ae-2d80a495e780" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.092 [INFO][4003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" iface="eth0" netns="/var/run/netns/cni-e88606cf-d123-1eb3-66ae-2d80a495e780" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.092 [INFO][4003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.092 [INFO][4003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.124 [INFO][4021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.124 [INFO][4021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.125 [INFO][4021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.129 [WARNING][4021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.129 [INFO][4021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.130 [INFO][4021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:53.141481 env[1357]: 2025-11-01 01:01:53.133 [INFO][4003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:01:53.142049 env[1357]: time="2025-11-01T01:01:53.142019642Z" level=info msg="TearDown network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" successfully" Nov 1 01:01:53.142108 env[1357]: time="2025-11-01T01:01:53.142096733Z" level=info msg="StopPodSandbox for \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" returns successfully" Nov 1 01:01:53.142690 env[1357]: time="2025-11-01T01:01:53.142658395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clw4d,Uid:2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda,Namespace:calico-system,Attempt:1,}" Nov 1 01:01:53.143803 systemd[1]: run-netns-cni\x2de88606cf\x2dd123\x2d1eb3\x2d66ae\x2d2d80a495e780.mount: Deactivated successfully. 
Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.084 [INFO][4004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.084 [INFO][4004] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" iface="eth0" netns="/var/run/netns/cni-a6763c32-a545-9193-0f79-aaf1329e96bf" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.084 [INFO][4004] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" iface="eth0" netns="/var/run/netns/cni-a6763c32-a545-9193-0f79-aaf1329e96bf" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.084 [INFO][4004] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" iface="eth0" netns="/var/run/netns/cni-a6763c32-a545-9193-0f79-aaf1329e96bf" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.084 [INFO][4004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.084 [INFO][4004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.125 [INFO][4016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.127 [INFO][4016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.130 [INFO][4016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.136 [WARNING][4016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.136 [INFO][4016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.143 [INFO][4016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:53.149930 env[1357]: 2025-11-01 01:01:53.148 [INFO][4004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:01:53.151954 systemd[1]: run-netns-cni\x2da6763c32\x2da545\x2d9193\x2d0f79\x2daaf1329e96bf.mount: Deactivated successfully. 
Nov 1 01:01:53.155865 env[1357]: time="2025-11-01T01:01:53.152441624Z" level=info msg="TearDown network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" successfully" Nov 1 01:01:53.155865 env[1357]: time="2025-11-01T01:01:53.152468993Z" level=info msg="StopPodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" returns successfully" Nov 1 01:01:53.155865 env[1357]: time="2025-11-01T01:01:53.153207053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647db64f7d-p9wv7,Uid:a48af7bf-dcba-4afd-bada-a7a0787cc063,Namespace:calico-system,Attempt:1,}" Nov 1 01:01:53.301553 systemd-networkd[1114]: caliefccec1158f: Link UP Nov 1 01:01:53.306691 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:01:53.306767 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliefccec1158f: link becomes ready Nov 1 01:01:53.306870 systemd-networkd[1114]: caliefccec1158f: Gained carrier Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.239 [INFO][4029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--clw4d-eth0 csi-node-driver- calico-system 2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda 942 0 2025-11-01 01:01:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-clw4d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliefccec1158f [] [] }} ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.239 [INFO][4029] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.264 [INFO][4052] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" HandleID="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.265 [INFO][4052] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" HandleID="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-clw4d", "timestamp":"2025-11-01 01:01:53.264899607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.265 [INFO][4052] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.265 [INFO][4052] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.265 [INFO][4052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.270 [INFO][4052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.273 [INFO][4052] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.277 [INFO][4052] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.279 [INFO][4052] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.281 [INFO][4052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.281 [INFO][4052] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.287 [INFO][4052] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0 Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.289 [INFO][4052] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.293 [INFO][4052] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" host="localhost" Nov 1 01:01:53.324879 
env[1357]: 2025-11-01 01:01:53.293 [INFO][4052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" host="localhost" Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.293 [INFO][4052] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:53.324879 env[1357]: 2025-11-01 01:01:53.293 [INFO][4052] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" HandleID="k8s-pod-network.be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.326311 env[1357]: 2025-11-01 01:01:53.298 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--clw4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-clw4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliefccec1158f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:53.326311 env[1357]: 2025-11-01 01:01:53.298 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.326311 env[1357]: 2025-11-01 01:01:53.298 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefccec1158f ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.326311 env[1357]: 2025-11-01 01:01:53.305 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.326311 env[1357]: 2025-11-01 01:01:53.307 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--clw4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0", Pod:"csi-node-driver-clw4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliefccec1158f", MAC:"4e:02:9a:e5:2e:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:53.326311 env[1357]: 2025-11-01 01:01:53.322 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0" Namespace="calico-system" Pod="csi-node-driver-clw4d" WorkloadEndpoint="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:01:53.377357 env[1357]: time="2025-11-01T01:01:53.377305667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:53.377492 env[1357]: time="2025-11-01T01:01:53.377477031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:53.377560 env[1357]: time="2025-11-01T01:01:53.377546644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:53.377751 env[1357]: time="2025-11-01T01:01:53.377719742Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0 pid=4082 runtime=io.containerd.runc.v2 Nov 1 01:01:53.405000 audit[4109]: NETFILTER_CFG table=filter:113 family=2 entries=40 op=nft_register_chain pid=4109 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:53.405000 audit[4109]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffcee89e050 a2=0 a3=7ffcee89e03c items=0 ppid=3612 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:53.405000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:53.407043 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:53.427818 env[1357]: time="2025-11-01T01:01:53.427787008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clw4d,Uid:2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda,Namespace:calico-system,Attempt:1,} returns sandbox id \"be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0\"" Nov 1 01:01:53.438432 env[1357]: 
time="2025-11-01T01:01:53.438353193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:01:53.452231 systemd-networkd[1114]: cali9e0739d5afb: Link UP Nov 1 01:01:53.453877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9e0739d5afb: link becomes ready Nov 1 01:01:53.453742 systemd-networkd[1114]: cali9e0739d5afb: Gained carrier Nov 1 01:01:53.486000 audit[4119]: NETFILTER_CFG table=filter:114 family=2 entries=44 op=nft_register_chain pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:53.486000 audit[4119]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffc9968d920 a2=0 a3=7ffc9968d90c items=0 ppid=3612 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:53.486000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.252 [INFO][4031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0 calico-kube-controllers-647db64f7d- calico-system a48af7bf-dcba-4afd-bada-a7a0787cc063 941 0 2025-11-01 01:01:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:647db64f7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-647db64f7d-p9wv7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9e0739d5afb [] [] }} ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" 
Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.252 [INFO][4031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.285 [INFO][4059] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" HandleID="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.285 [INFO][4059] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" HandleID="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000250fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-647db64f7d-p9wv7", "timestamp":"2025-11-01 01:01:53.285121218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.285 [INFO][4059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.294 [INFO][4059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.294 [INFO][4059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.371 [INFO][4059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.413 [INFO][4059] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.430 [INFO][4059] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.432 [INFO][4059] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.434 [INFO][4059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.435 [INFO][4059] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.436 [INFO][4059] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56 Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.439 [INFO][4059] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.444 [INFO][4059] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" host="localhost" Nov 1 01:01:53.489105 
env[1357]: 2025-11-01 01:01:53.444 [INFO][4059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" host="localhost" Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.444 [INFO][4059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:53.489105 env[1357]: 2025-11-01 01:01:53.444 [INFO][4059] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" HandleID="k8s-pod-network.4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.497560 env[1357]: 2025-11-01 01:01:53.448 [INFO][4031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0", GenerateName:"calico-kube-controllers-647db64f7d-", Namespace:"calico-system", SelfLink:"", UID:"a48af7bf-dcba-4afd-bada-a7a0787cc063", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"647db64f7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-647db64f7d-p9wv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e0739d5afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:53.497560 env[1357]: 2025-11-01 01:01:53.449 [INFO][4031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.497560 env[1357]: 2025-11-01 01:01:53.449 [INFO][4031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e0739d5afb ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.497560 env[1357]: 2025-11-01 01:01:53.454 [INFO][4031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.497560 env[1357]: 2025-11-01 01:01:53.461 [INFO][4031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0", GenerateName:"calico-kube-controllers-647db64f7d-", Namespace:"calico-system", SelfLink:"", UID:"a48af7bf-dcba-4afd-bada-a7a0787cc063", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"647db64f7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56", Pod:"calico-kube-controllers-647db64f7d-p9wv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e0739d5afb", MAC:"ca:5e:1b:c0:f8:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:53.497560 env[1357]: 2025-11-01 01:01:53.476 [INFO][4031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56" Namespace="calico-system" Pod="calico-kube-controllers-647db64f7d-p9wv7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:01:53.582717 env[1357]: time="2025-11-01T01:01:53.582623865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:53.582717 env[1357]: time="2025-11-01T01:01:53.582684885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:53.589366 env[1357]: time="2025-11-01T01:01:53.582704302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:53.589366 env[1357]: time="2025-11-01T01:01:53.583971033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56 pid=4132 runtime=io.containerd.runc.v2 Nov 1 01:01:53.602380 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:53.631276 env[1357]: time="2025-11-01T01:01:53.631227332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647db64f7d-p9wv7,Uid:a48af7bf-dcba-4afd-bada-a7a0787cc063,Namespace:calico-system,Attempt:1,} returns sandbox id \"4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56\"" Nov 1 01:01:53.743000 audit[4169]: NETFILTER_CFG table=filter:115 family=2 entries=17 op=nft_register_rule pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:53.743000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9354d420 a2=0 a3=7ffd9354d40c items=0 ppid=2387 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:53.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:53.746000 audit[4169]: NETFILTER_CFG table=nat:116 family=2 entries=35 op=nft_register_chain pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:53.746000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd9354d420 a2=0 a3=7ffd9354d40c items=0 ppid=2387 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:53.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:53.756314 env[1357]: time="2025-11-01T01:01:53.756264621Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:53.770156 env[1357]: time="2025-11-01T01:01:53.770095708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:01:53.795603 kubelet[2289]: E1101 01:01:53.795563 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:01:53.795751 kubelet[2289]: E1101 01:01:53.795613 2289 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:01:53.796238 env[1357]: time="2025-11-01T01:01:53.796211108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:01:53.803645 kubelet[2289]: E1101 01:01:53.803606 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:54.012784 systemd-networkd[1114]: cali40be43ccdf6: Gained IPv6LL Nov 1 01:01:54.017558 env[1357]: time="2025-11-01T01:01:54.017505021Z" level=info msg="StopPodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\"" Nov 1 01:01:54.017759 env[1357]: time="2025-11-01T01:01:54.017739889Z" level=info msg="StopPodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\"" Nov 1 01:01:54.139605 env[1357]: time="2025-11-01T01:01:54.139566078Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:54.147147 env[1357]: time="2025-11-01T01:01:54.146709333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:01:54.147248 kubelet[2289]: E1101 01:01:54.146864 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:01:54.147248 kubelet[2289]: E1101 01:01:54.146931 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:01:54.147248 kubelet[2289]: E1101 01:01:54.147100 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67xqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-647db64f7d-p9wv7_calico-system(a48af7bf-dcba-4afd-bada-a7a0787cc063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:54.148592 kubelet[2289]: E1101 01:01:54.148389 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:01:54.148818 env[1357]: time="2025-11-01T01:01:54.148791210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.174 [INFO][4190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.175 [INFO][4190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" iface="eth0" netns="/var/run/netns/cni-b169e79b-457e-c6cb-30c1-1d156f7b72b9" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.175 [INFO][4190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" iface="eth0" netns="/var/run/netns/cni-b169e79b-457e-c6cb-30c1-1d156f7b72b9" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.175 [INFO][4190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" iface="eth0" netns="/var/run/netns/cni-b169e79b-457e-c6cb-30c1-1d156f7b72b9" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.175 [INFO][4190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.175 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.227 [INFO][4204] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.227 [INFO][4204] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.227 [INFO][4204] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.241 [WARNING][4204] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.241 [INFO][4204] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.243 [INFO][4204] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:54.246334 env[1357]: 2025-11-01 01:01:54.245 [INFO][4190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:01:54.248572 systemd[1]: run-netns-cni\x2db169e79b\x2d457e\x2dc6cb\x2d30c1\x2d1d156f7b72b9.mount: Deactivated successfully. 
Nov 1 01:01:54.255754 env[1357]: time="2025-11-01T01:01:54.248763558Z" level=info msg="TearDown network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" successfully" Nov 1 01:01:54.255754 env[1357]: time="2025-11-01T01:01:54.248785020Z" level=info msg="StopPodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" returns successfully" Nov 1 01:01:54.255754 env[1357]: time="2025-11-01T01:01:54.250007889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c87d7ff56-95ljj,Uid:dbcec211-4cad-40a0-8aa5-e63111d93180,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.217 [INFO][4189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.217 [INFO][4189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" iface="eth0" netns="/var/run/netns/cni-3ea3af37-7fb9-8959-5264-7cb92efaaeaa" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.218 [INFO][4189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" iface="eth0" netns="/var/run/netns/cni-3ea3af37-7fb9-8959-5264-7cb92efaaeaa" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.218 [INFO][4189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" iface="eth0" netns="/var/run/netns/cni-3ea3af37-7fb9-8959-5264-7cb92efaaeaa" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.218 [INFO][4189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.218 [INFO][4189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.253 [INFO][4210] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.253 [INFO][4210] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.253 [INFO][4210] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.268 [WARNING][4210] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.268 [INFO][4210] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.269 [INFO][4210] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:54.277992 env[1357]: 2025-11-01 01:01:54.270 [INFO][4189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:01:54.277992 env[1357]: time="2025-11-01T01:01:54.273885009Z" level=info msg="TearDown network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" successfully" Nov 1 01:01:54.277992 env[1357]: time="2025-11-01T01:01:54.273911738Z" level=info msg="StopPodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" returns successfully" Nov 1 01:01:54.274159 systemd[1]: run-netns-cni\x2d3ea3af37\x2d7fb9\x2d8959\x2d5264\x2d7cb92efaaeaa.mount: Deactivated successfully. 
Nov 1 01:01:54.278638 env[1357]: time="2025-11-01T01:01:54.278619457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-fb8nb,Uid:9e4c5135-dd57-4916-a0b6-81789ca74a77,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:01:54.492910 systemd-networkd[1114]: calic97f3f941bb: Link UP Nov 1 01:01:54.498067 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:01:54.498166 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic97f3f941bb: link becomes ready Nov 1 01:01:54.498259 systemd-networkd[1114]: calic97f3f941bb: Gained carrier Nov 1 01:01:54.500306 env[1357]: time="2025-11-01T01:01:54.500272679Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:54.515980 env[1357]: time="2025-11-01T01:01:54.515934277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:01:54.516141 kubelet[2289]: E1101 01:01:54.516119 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:01:54.516183 kubelet[2289]: E1101 01:01:54.516149 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:01:54.516245 kubelet[2289]: E1101 01:01:54.516219 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:54.517472 kubelet[2289]: E1101 01:01:54.517449 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:54.524801 systemd-networkd[1114]: caliefccec1158f: Gained IPv6LL Nov 1 01:01:54.602680 kubelet[2289]: E1101 01:01:54.602057 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:01:54.602680 kubelet[2289]: E1101 01:01:54.602001 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.380 [INFO][4223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0 calico-apiserver-5c87d7ff56- calico-apiserver dbcec211-4cad-40a0-8aa5-e63111d93180 963 0 2025-11-01 01:01:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c87d7ff56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c87d7ff56-95ljj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic97f3f941bb 
[] [] }} ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.380 [INFO][4223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.418 [INFO][4244] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" HandleID="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.418 [INFO][4244] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" HandleID="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003253a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c87d7ff56-95ljj", "timestamp":"2025-11-01 01:01:54.418658389 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.418 [INFO][4244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.418 [INFO][4244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.418 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.433 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.436 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.439 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.440 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.444 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.444 [INFO][4244] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.446 [INFO][4244] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6 Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.469 [INFO][4244] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.487 [INFO][4244] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.487 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" host="localhost" Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.487 [INFO][4244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:54.644094 env[1357]: 2025-11-01 01:01:54.487 [INFO][4244] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" HandleID="k8s-pod-network.dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.644601 env[1357]: 2025-11-01 01:01:54.490 [INFO][4223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0", GenerateName:"calico-apiserver-5c87d7ff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcec211-4cad-40a0-8aa5-e63111d93180", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c87d7ff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c87d7ff56-95ljj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97f3f941bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:54.644601 env[1357]: 2025-11-01 01:01:54.490 [INFO][4223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.644601 env[1357]: 2025-11-01 01:01:54.490 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic97f3f941bb ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.644601 env[1357]: 2025-11-01 01:01:54.501 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.644601 env[1357]: 2025-11-01 01:01:54.501 [INFO][4223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0", GenerateName:"calico-apiserver-5c87d7ff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcec211-4cad-40a0-8aa5-e63111d93180", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c87d7ff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6", Pod:"calico-apiserver-5c87d7ff56-95ljj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97f3f941bb", MAC:"be:ea:b2:46:59:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:54.644601 env[1357]: 2025-11-01 01:01:54.639 [INFO][4223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6" Namespace="calico-apiserver" Pod="calico-apiserver-5c87d7ff56-95ljj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:01:54.662268 env[1357]: time="2025-11-01T01:01:54.662232627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:54.662394 env[1357]: time="2025-11-01T01:01:54.662380064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:54.662467 env[1357]: time="2025-11-01T01:01:54.662453521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:54.662604 env[1357]: time="2025-11-01T01:01:54.662590049Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6 pid=4275 runtime=io.containerd.runc.v2 Nov 1 01:01:54.668000 audit[4288]: NETFILTER_CFG table=filter:117 family=2 entries=68 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:54.678747 kernel: kauditd_printk_skb: 20 callbacks suppressed Nov 1 01:01:54.681611 kernel: audit: type=1325 audit(1761958914.668:421): table=filter:117 family=2 entries=68 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:54.681647 kernel: audit: type=1300 audit(1761958914.668:421): arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7ffe6f4f3470 a2=0 a3=7ffe6f4f345c items=0 ppid=3612 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:54.681671 kernel: audit: 
type=1327 audit(1761958914.668:421): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:54.668000 audit[4288]: SYSCALL arch=c000003e syscall=46 success=yes exit=34624 a0=3 a1=7ffe6f4f3470 a2=0 a3=7ffe6f4f345c items=0 ppid=3612 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:54.668000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:54.699040 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:54.733737 env[1357]: time="2025-11-01T01:01:54.733697999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c87d7ff56-95ljj,Uid:dbcec211-4cad-40a0-8aa5-e63111d93180,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6\"" Nov 1 01:01:54.741052 env[1357]: time="2025-11-01T01:01:54.741026615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:01:54.779800 systemd-networkd[1114]: caliadad37b4f0e: Link UP Nov 1 01:01:54.781688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliadad37b4f0e: link becomes ready Nov 1 01:01:54.783764 systemd-networkd[1114]: caliadad37b4f0e: Gained carrier Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.411 [INFO][4234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0 calico-apiserver-7787d665fb- calico-apiserver 9e4c5135-dd57-4916-a0b6-81789ca74a77 964 0 2025-11-01 01:01:24 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7787d665fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7787d665fb-fb8nb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadad37b4f0e [] [] }} ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.411 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.436 [INFO][4253] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" HandleID="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.436 [INFO][4253] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" HandleID="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7787d665fb-fb8nb", "timestamp":"2025-11-01 01:01:54.436034959 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.436 [INFO][4253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.487 [INFO][4253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.488 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.634 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.685 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.721 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.727 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.732 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.732 [INFO][4253] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.737 [INFO][4253] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2 Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.745 [INFO][4253] ipam/ipam.go 1246: 
Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.752 [INFO][4253] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.752 [INFO][4253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" host="localhost" Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.752 [INFO][4253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:54.806929 env[1357]: 2025-11-01 01:01:54.752 [INFO][4253] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" HandleID="k8s-pod-network.318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.807437 env[1357]: 2025-11-01 01:01:54.753 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e4c5135-dd57-4916-a0b6-81789ca74a77", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7787d665fb-fb8nb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadad37b4f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:54.807437 env[1357]: 2025-11-01 01:01:54.753 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.807437 env[1357]: 2025-11-01 01:01:54.753 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadad37b4f0e ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.807437 env[1357]: 2025-11-01 01:01:54.786 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" 
Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.807437 env[1357]: 2025-11-01 01:01:54.786 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e4c5135-dd57-4916-a0b6-81789ca74a77", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2", Pod:"calico-apiserver-7787d665fb-fb8nb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadad37b4f0e", MAC:"ba:4d:50:7f:c9:7d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:54.807437 env[1357]: 2025-11-01 01:01:54.805 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-fb8nb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:01:54.823413 env[1357]: time="2025-11-01T01:01:54.821884153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:54.823413 env[1357]: time="2025-11-01T01:01:54.821925293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:54.823413 env[1357]: time="2025-11-01T01:01:54.821940618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:54.823413 env[1357]: time="2025-11-01T01:01:54.822046084Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2 pid=4324 runtime=io.containerd.runc.v2 Nov 1 01:01:54.838258 kernel: audit: type=1325 audit(1761958914.826:422): table=filter:118 family=2 entries=49 op=nft_register_chain pid=4334 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:54.838350 kernel: audit: type=1300 audit(1761958914.826:422): arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7fffc8518040 a2=0 a3=7fffc851802c items=0 ppid=3612 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:54.842762 kernel: audit: type=1327 audit(1761958914.826:422): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:54.826000 audit[4334]: NETFILTER_CFG table=filter:118 family=2 entries=49 op=nft_register_chain pid=4334 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:54.826000 audit[4334]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7fffc8518040 a2=0 a3=7fffc851802c items=0 ppid=3612 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:54.826000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:54.844707 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS 
names, ignoring: No such device or address Nov 1 01:01:54.870712 env[1357]: time="2025-11-01T01:01:54.868907327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-fb8nb,Uid:9e4c5135-dd57-4916-a0b6-81789ca74a77,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2\"" Nov 1 01:01:55.017104 env[1357]: time="2025-11-01T01:01:55.017071305Z" level=info msg="StopPodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\"" Nov 1 01:01:55.024794 env[1357]: time="2025-11-01T01:01:55.024052123Z" level=info msg="StopPodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\"" Nov 1 01:01:55.071808 env[1357]: time="2025-11-01T01:01:55.069334954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:55.073856 env[1357]: time="2025-11-01T01:01:55.073811061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:01:55.074016 kubelet[2289]: E1101 01:01:55.073979 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:01:55.074057 kubelet[2289]: E1101 01:01:55.074025 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:01:55.077856 kubelet[2289]: E1101 01:01:55.074277 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c87d7ff56-95ljj_calico-apiserver(dbcec211-4cad-40a0-8aa5-e63111d93180): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:55.077856 kubelet[2289]: E1101 01:01:55.075769 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:01:55.078113 env[1357]: time="2025-11-01T01:01:55.074559267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.075 [INFO][4368] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.076 [INFO][4368] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" iface="eth0" netns="/var/run/netns/cni-32185720-5ee0-41fd-2ab4-f869f6c0e495" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.076 [INFO][4368] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" iface="eth0" netns="/var/run/netns/cni-32185720-5ee0-41fd-2ab4-f869f6c0e495" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.076 [INFO][4368] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" iface="eth0" netns="/var/run/netns/cni-32185720-5ee0-41fd-2ab4-f869f6c0e495" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.076 [INFO][4368] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.076 [INFO][4368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.131 [INFO][4389] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.131 [INFO][4389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.131 [INFO][4389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.136 [WARNING][4389] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.137 [INFO][4389] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.137 [INFO][4389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:55.141952 env[1357]: 2025-11-01 01:01:55.140 [INFO][4368] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:01:55.145599 env[1357]: time="2025-11-01T01:01:55.144582389Z" level=info msg="TearDown network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" successfully" Nov 1 01:01:55.145599 env[1357]: time="2025-11-01T01:01:55.144606051Z" level=info msg="StopPodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" returns successfully" Nov 1 01:01:55.145599 env[1357]: time="2025-11-01T01:01:55.145129305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hsw5f,Uid:cb274633-10f4-4984-be19-e536608b3bf1,Namespace:calico-system,Attempt:1,}" Nov 1 01:01:55.143878 systemd[1]: run-netns-cni\x2d32185720\x2d5ee0\x2d41fd\x2d2ab4\x2df869f6c0e495.mount: Deactivated successfully. 
Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.102 [INFO][4382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.102 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" iface="eth0" netns="/var/run/netns/cni-6998f30a-c125-b466-7a65-156fb86e6119" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.103 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" iface="eth0" netns="/var/run/netns/cni-6998f30a-c125-b466-7a65-156fb86e6119" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.113 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" iface="eth0" netns="/var/run/netns/cni-6998f30a-c125-b466-7a65-156fb86e6119" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.113 [INFO][4382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.113 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.164 [INFO][4398] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.164 [INFO][4398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.164 [INFO][4398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.169 [WARNING][4398] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.169 [INFO][4398] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.170 [INFO][4398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:55.172938 env[1357]: 2025-11-01 01:01:55.171 [INFO][4382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:01:55.175226 systemd[1]: run-netns-cni\x2d6998f30a\x2dc125\x2db466\x2d7a65\x2d156fb86e6119.mount: Deactivated successfully. 
Nov 1 01:01:55.175608 env[1357]: time="2025-11-01T01:01:55.175583904Z" level=info msg="TearDown network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" successfully" Nov 1 01:01:55.175685 env[1357]: time="2025-11-01T01:01:55.175651397Z" level=info msg="StopPodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" returns successfully" Nov 1 01:01:55.176382 env[1357]: time="2025-11-01T01:01:55.176361787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-xxt5l,Uid:27248a54-b567-4865-8268-2eb8267aa120,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:01:55.305934 systemd-networkd[1114]: cali5630d412530: Link UP Nov 1 01:01:55.307401 systemd-networkd[1114]: cali5630d412530: Gained carrier Nov 1 01:01:55.307676 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5630d412530: link becomes ready Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.204 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--hsw5f-eth0 goldmane-666569f655- calico-system cb274633-10f4-4984-be19-e536608b3bf1 988 0 2025-11-01 01:01:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-hsw5f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5630d412530 [] [] }} ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.204 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" 
Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.241 [INFO][4428] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" HandleID="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.241 [INFO][4428] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" HandleID="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040f060), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-hsw5f", "timestamp":"2025-11-01 01:01:55.241398631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.241 [INFO][4428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.241 [INFO][4428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.241 [INFO][4428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.267 [INFO][4428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.270 [INFO][4428] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.272 [INFO][4428] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.272 [INFO][4428] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.273 [INFO][4428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.273 [INFO][4428] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.274 [INFO][4428] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.286 [INFO][4428] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.302 [INFO][4428] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" host="localhost" Nov 1 01:01:55.352744 
env[1357]: 2025-11-01 01:01:55.302 [INFO][4428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" host="localhost" Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.303 [INFO][4428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:55.352744 env[1357]: 2025-11-01 01:01:55.303 [INFO][4428] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" HandleID="k8s-pod-network.7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.374574 env[1357]: 2025-11-01 01:01:55.304 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hsw5f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cb274633-10f4-4984-be19-e536608b3bf1", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-hsw5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5630d412530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:55.374574 env[1357]: 2025-11-01 01:01:55.304 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.374574 env[1357]: 2025-11-01 01:01:55.304 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5630d412530 ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.374574 env[1357]: 2025-11-01 01:01:55.307 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.374574 env[1357]: 2025-11-01 01:01:55.308 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hsw5f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cb274633-10f4-4984-be19-e536608b3bf1", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c", Pod:"goldmane-666569f655-hsw5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5630d412530", MAC:"ce:fb:87:62:c7:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:55.374574 env[1357]: 2025-11-01 01:01:55.350 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c" Namespace="calico-system" Pod="goldmane-666569f655-hsw5f" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:01:55.385065 env[1357]: time="2025-11-01T01:01:55.385035261Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:55.385377 env[1357]: time="2025-11-01T01:01:55.383997067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:55.385377 env[1357]: time="2025-11-01T01:01:55.384066425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:55.385377 env[1357]: time="2025-11-01T01:01:55.384075618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:55.385377 env[1357]: time="2025-11-01T01:01:55.384172461Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c pid=4461 runtime=io.containerd.runc.v2 Nov 1 01:01:55.387000 audit[4460]: NETFILTER_CFG table=filter:119 family=2 entries=60 op=nft_register_chain pid=4460 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:55.396581 kernel: audit: type=1325 audit(1761958915.387:423): table=filter:119 family=2 entries=60 op=nft_register_chain pid=4460 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:55.396637 kernel: audit: type=1300 audit(1761958915.387:423): arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7fff38593c70 a2=0 a3=7fff38593c5c items=0 ppid=3612 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.396657 kernel: audit: type=1327 audit(1761958915.387:423): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:55.387000 audit[4460]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7fff38593c70 a2=0 a3=7fff38593c5c items=0 ppid=3612 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.387000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:55.396863 env[1357]: time="2025-11-01T01:01:55.394327292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:01:55.421456 systemd-networkd[1114]: cali598f3cd8adc: Link UP Nov 1 01:01:55.424087 systemd-networkd[1114]: cali598f3cd8adc: Gained carrier Nov 1 01:01:55.424701 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali598f3cd8adc: link becomes ready Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.223 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0 calico-apiserver-7787d665fb- calico-apiserver 27248a54-b567-4865-8268-2eb8267aa120 991 0 2025-11-01 01:01:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7787d665fb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7787d665fb-xxt5l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali598f3cd8adc [] [] }} ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 
01:01:55.224 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.258 [INFO][4436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" HandleID="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.258 [INFO][4436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" HandleID="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7787d665fb-xxt5l", "timestamp":"2025-11-01 01:01:55.258312606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.258 [INFO][4436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.303 [INFO][4436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.303 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.351 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.371 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.374 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.376 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.378 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.378 [INFO][4436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.380 [INFO][4436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.387 [INFO][4436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.401 [INFO][4436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" host="localhost" Nov 1 01:01:55.442187 
env[1357]: 2025-11-01 01:01:55.401 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" host="localhost" Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.401 [INFO][4436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:55.442187 env[1357]: 2025-11-01 01:01:55.401 [INFO][4436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" HandleID="k8s-pod-network.78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.450810 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:55.451319 env[1357]: 2025-11-01 01:01:55.403 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"27248a54-b567-4865-8268-2eb8267aa120", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7787d665fb-xxt5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali598f3cd8adc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:55.451319 env[1357]: 2025-11-01 01:01:55.406 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.451319 env[1357]: 2025-11-01 01:01:55.406 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali598f3cd8adc ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.451319 env[1357]: 2025-11-01 01:01:55.424 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.451319 env[1357]: 2025-11-01 01:01:55.425 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"27248a54-b567-4865-8268-2eb8267aa120", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c", Pod:"calico-apiserver-7787d665fb-xxt5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali598f3cd8adc", MAC:"6a:1a:f6:4d:60:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:55.451319 env[1357]: 2025-11-01 01:01:55.441 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c" Namespace="calico-apiserver" Pod="calico-apiserver-7787d665fb-xxt5l" WorkloadEndpoint="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:01:55.449000 audit[4493]: NETFILTER_CFG table=filter:120 family=2 entries=63 op=nft_register_chain pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:55.449000 audit[4493]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7ffee56d57d0 a2=0 a3=7ffee56d57bc items=0 ppid=3612 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:55.454697 kernel: audit: type=1325 audit(1761958915.449:424): table=filter:120 family=2 entries=63 op=nft_register_chain pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:55.469563 env[1357]: time="2025-11-01T01:01:55.469483709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:55.469563 env[1357]: time="2025-11-01T01:01:55.469557373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:55.469700 env[1357]: time="2025-11-01T01:01:55.469579682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:55.469753 env[1357]: time="2025-11-01T01:01:55.469726708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c pid=4502 runtime=io.containerd.runc.v2 Nov 1 01:01:55.484778 systemd-networkd[1114]: cali9e0739d5afb: Gained IPv6LL Nov 1 01:01:55.503115 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:55.513167 env[1357]: time="2025-11-01T01:01:55.513139081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hsw5f,Uid:cb274633-10f4-4984-be19-e536608b3bf1,Namespace:calico-system,Attempt:1,} returns sandbox id \"7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c\"" Nov 1 01:01:55.527930 env[1357]: time="2025-11-01T01:01:55.527896290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7787d665fb-xxt5l,Uid:27248a54-b567-4865-8268-2eb8267aa120,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c\"" Nov 1 01:01:55.614139 kubelet[2289]: E1101 01:01:55.394819 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:01:55.614429 kubelet[2289]: E1101 01:01:55.614148 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:01:55.624446 kubelet[2289]: E1101 01:01:55.624399 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgbnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7787d665fb-fb8nb_calico-apiserver(9e4c5135-dd57-4916-a0b6-81789ca74a77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:55.629718 kubelet[2289]: E1101 01:01:55.629694 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:01:55.631275 env[1357]: time="2025-11-01T01:01:55.631254071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:01:55.658300 kubelet[2289]: E1101 01:01:55.656946 2289 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:01:55.659057 kubelet[2289]: E1101 01:01:55.659026 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:01:55.659147 kubelet[2289]: E1101 01:01:55.659096 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:01:55.825000 audit[4547]: NETFILTER_CFG table=filter:121 family=2 entries=14 op=nft_register_rule 
pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:55.825000 audit[4547]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda87061a0 a2=0 a3=7ffda870618c items=0 ppid=2387 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:55.831000 audit[4547]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:55.831000 audit[4547]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda87061a0 a2=0 a3=7ffda870618c items=0 ppid=2387 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:55.844000 audit[4549]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:55.844000 audit[4549]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd1a731370 a2=0 a3=7ffd1a73135c items=0 ppid=2387 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:55.849000 audit[4549]: 
NETFILTER_CFG table=nat:124 family=2 entries=20 op=nft_register_rule pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:55.849000 audit[4549]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd1a731370 a2=0 a3=7ffd1a73135c items=0 ppid=2387 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:55.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:55.939881 env[1357]: time="2025-11-01T01:01:55.939035168Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:55.949756 env[1357]: time="2025-11-01T01:01:55.949674215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:01:55.955622 env[1357]: time="2025-11-01T01:01:55.950563717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:01:55.955704 kubelet[2289]: E1101 01:01:55.949920 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:01:55.955704 kubelet[2289]: E1101 01:01:55.950017 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:01:55.955704 kubelet[2289]: E1101 01:01:55.950183 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6njt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7787d665fb-xxt5l_calico-apiserver(27248a54-b567-4865-8268-2eb8267aa120): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:55.955704 kubelet[2289]: E1101 01:01:55.952835 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:01:56.189095 systemd-networkd[1114]: caliadad37b4f0e: Gained IPv6LL Nov 1 01:01:56.252834 systemd-networkd[1114]: calic97f3f941bb: Gained IPv6LL Nov 1 01:01:56.275159 env[1357]: 
time="2025-11-01T01:01:56.275112047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:01:56.284192 env[1357]: time="2025-11-01T01:01:56.284150996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:01:56.286317 kubelet[2289]: E1101 01:01:56.286281 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:01:56.286527 kubelet[2289]: E1101 01:01:56.286337 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:01:56.286691 kubelet[2289]: E1101 01:01:56.286442 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mx6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hsw5f_calico-system(cb274633-10f4-4984-be19-e536608b3bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:01:56.289334 kubelet[2289]: E1101 01:01:56.289305 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:01:56.660101 kubelet[2289]: E1101 01:01:56.660076 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:01:56.660540 kubelet[2289]: E1101 01:01:56.660162 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:01:56.660607 kubelet[2289]: E1101 01:01:56.660205 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:01:56.660677 kubelet[2289]: E1101 01:01:56.660247 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:01:56.756000 audit[4551]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=4551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:56.756000 audit[4551]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc30d7110 a2=0 a3=7fffc30d70fc items=0 ppid=2387 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:56.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:56.761000 audit[4551]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=4551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:56.761000 audit[4551]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffc30d7110 a2=0 a3=7fffc30d70fc items=0 ppid=2387 pid=4551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:56.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:57.016967 env[1357]: time="2025-11-01T01:01:57.016878183Z" level=info msg="StopPodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\"" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.047 [INFO][4561] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.047 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" iface="eth0" netns="/var/run/netns/cni-07fcc8a1-f2ea-5922-4fb6-a0d960d50f12" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.047 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" iface="eth0" netns="/var/run/netns/cni-07fcc8a1-f2ea-5922-4fb6-a0d960d50f12" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.047 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" iface="eth0" netns="/var/run/netns/cni-07fcc8a1-f2ea-5922-4fb6-a0d960d50f12" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.047 [INFO][4561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.047 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.062 [INFO][4568] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.062 [INFO][4568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.062 [INFO][4568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.067 [WARNING][4568] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.068 [INFO][4568] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.069 [INFO][4568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:57.072023 env[1357]: 2025-11-01 01:01:57.070 [INFO][4561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:01:57.074124 systemd[1]: run-netns-cni\x2d07fcc8a1\x2df2ea\x2d5922\x2d4fb6\x2da0d960d50f12.mount: Deactivated successfully. 
Nov 1 01:01:57.075141 env[1357]: time="2025-11-01T01:01:57.074754157Z" level=info msg="TearDown network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" successfully" Nov 1 01:01:57.075141 env[1357]: time="2025-11-01T01:01:57.074794881Z" level=info msg="StopPodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" returns successfully" Nov 1 01:01:57.076419 env[1357]: time="2025-11-01T01:01:57.076393514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzrxr,Uid:8387fb77-dd73-4b48-9e3f-a6209aeef170,Namespace:kube-system,Attempt:1,}" Nov 1 01:01:57.084812 systemd-networkd[1114]: cali5630d412530: Gained IPv6LL Nov 1 01:01:57.209403 systemd-networkd[1114]: calif7369b63cde: Link UP Nov 1 01:01:57.212216 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 01:01:57.212339 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif7369b63cde: link becomes ready Nov 1 01:01:57.212474 systemd-networkd[1114]: calif7369b63cde: Gained carrier Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.137 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0 coredns-668d6bf9bc- kube-system 8387fb77-dd73-4b48-9e3f-a6209aeef170 1039 0 2025-11-01 01:01:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wzrxr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7369b63cde [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.137 [INFO][4574] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.176 [INFO][4588] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" HandleID="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.176 [INFO][4588] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" HandleID="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd030), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wzrxr", "timestamp":"2025-11-01 01:01:57.17625204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.176 [INFO][4588] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.176 [INFO][4588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.176 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.181 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.188 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.192 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.193 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.195 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.195 [INFO][4588] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.196 [INFO][4588] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156 Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.200 [INFO][4588] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.204 [INFO][4588] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" host="localhost" Nov 1 01:01:57.236502 
env[1357]: 2025-11-01 01:01:57.204 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" host="localhost" Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.204 [INFO][4588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:01:57.236502 env[1357]: 2025-11-01 01:01:57.204 [INFO][4588] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" HandleID="k8s-pod-network.907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.247119 env[1357]: 2025-11-01 01:01:57.206 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8387fb77-dd73-4b48-9e3f-a6209aeef170", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-668d6bf9bc-wzrxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7369b63cde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:57.247119 env[1357]: 2025-11-01 01:01:57.206 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.247119 env[1357]: 2025-11-01 01:01:57.206 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7369b63cde ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.247119 env[1357]: 2025-11-01 01:01:57.214 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.247119 env[1357]: 2025-11-01 01:01:57.218 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8387fb77-dd73-4b48-9e3f-a6209aeef170", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156", Pod:"coredns-668d6bf9bc-wzrxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7369b63cde", MAC:"fe:6a:ca:9c:17:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:01:57.247119 env[1357]: 2025-11-01 01:01:57.232 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156" Namespace="kube-system" Pod="coredns-668d6bf9bc-wzrxr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:01:57.251473 env[1357]: time="2025-11-01T01:01:57.251441429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:57.251543 env[1357]: time="2025-11-01T01:01:57.251525325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:57.251609 env[1357]: time="2025-11-01T01:01:57.251595411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:57.251766 env[1357]: time="2025-11-01T01:01:57.251748555Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156 pid=4610 runtime=io.containerd.runc.v2 Nov 1 01:01:57.256000 audit[4603]: NETFILTER_CFG table=filter:127 family=2 entries=52 op=nft_register_chain pid=4603 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 01:01:57.256000 audit[4603]: SYSCALL arch=c000003e syscall=46 success=yes exit=23876 a0=3 a1=7ffd41896060 a2=0 a3=7ffd4189604c items=0 ppid=3612 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:57.256000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 01:01:57.292205 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 01:01:57.322303 env[1357]: time="2025-11-01T01:01:57.322009034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wzrxr,Uid:8387fb77-dd73-4b48-9e3f-a6209aeef170,Namespace:kube-system,Attempt:1,} returns sandbox id \"907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156\"" Nov 1 01:01:57.341094 systemd-networkd[1114]: cali598f3cd8adc: Gained IPv6LL Nov 1 01:01:57.447708 env[1357]: time="2025-11-01T01:01:57.447072726Z" level=info msg="CreateContainer within sandbox \"907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:01:57.471371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89378638.mount: Deactivated successfully. 
Nov 1 01:01:57.487826 env[1357]: time="2025-11-01T01:01:57.487792206Z" level=info msg="CreateContainer within sandbox \"907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de52e06212ef43d39db0748c715a9005ef349f7d66d4610c403aa1ac5f0b5efb\"" Nov 1 01:01:57.488363 env[1357]: time="2025-11-01T01:01:57.488348606Z" level=info msg="StartContainer for \"de52e06212ef43d39db0748c715a9005ef349f7d66d4610c403aa1ac5f0b5efb\"" Nov 1 01:01:57.535295 env[1357]: time="2025-11-01T01:01:57.535261283Z" level=info msg="StartContainer for \"de52e06212ef43d39db0748c715a9005ef349f7d66d4610c403aa1ac5f0b5efb\" returns successfully" Nov 1 01:01:57.744439 kubelet[2289]: I1101 01:01:57.744401 2289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wzrxr" podStartSLOduration=42.721041964 podStartE2EDuration="42.721041964s" podCreationTimestamp="2025-11-01 01:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:57.720270385 +0000 UTC m=+47.819876001" watchObservedRunningTime="2025-11-01 01:01:57.721041964 +0000 UTC m=+47.820647575" Nov 1 01:01:57.749000 audit[4685]: NETFILTER_CFG table=filter:128 family=2 entries=14 op=nft_register_rule pid=4685 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:57.749000 audit[4685]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb2c6d710 a2=0 a3=7ffcb2c6d6fc items=0 ppid=2387 pid=4685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:57.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:57.753000 audit[4685]: NETFILTER_CFG 
table=nat:129 family=2 entries=44 op=nft_register_rule pid=4685 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:57.753000 audit[4685]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcb2c6d710 a2=0 a3=7ffcb2c6d6fc items=0 ppid=2387 pid=4685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:57.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:58.254675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891850304.mount: Deactivated successfully. Nov 1 01:01:58.620786 systemd-networkd[1114]: calif7369b63cde: Gained IPv6LL Nov 1 01:01:58.798000 audit[4687]: NETFILTER_CFG table=filter:130 family=2 entries=14 op=nft_register_rule pid=4687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:58.798000 audit[4687]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcef983950 a2=0 a3=7ffcef98393c items=0 ppid=2387 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:58.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:01:58.984000 audit[4687]: NETFILTER_CFG table=nat:131 family=2 entries=56 op=nft_register_chain pid=4687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:01:58.984000 audit[4687]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcef983950 a2=0 a3=7ffcef98393c items=0 ppid=2387 pid=4687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:01:58.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:02:03.017477 env[1357]: time="2025-11-01T01:02:03.017436595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:02:03.356275 env[1357]: time="2025-11-01T01:02:03.356167008Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:03.359279 env[1357]: time="2025-11-01T01:02:03.359243161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:02:03.363174 kubelet[2289]: E1101 01:02:03.363119 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:03.367374 kubelet[2289]: E1101 01:02:03.367335 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:03.367490 kubelet[2289]: E1101 01:02:03.367461 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24a2a9b1916e445d94a05dba571afa1c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:03.369493 env[1357]: time="2025-11-01T01:02:03.369293706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:02:03.690881 
env[1357]: time="2025-11-01T01:02:03.690832165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:03.695088 env[1357]: time="2025-11-01T01:02:03.695049162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:02:03.695214 kubelet[2289]: E1101 01:02:03.695171 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:03.695267 kubelet[2289]: E1101 01:02:03.695218 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:03.695327 kubelet[2289]: E1101 01:02:03.695298 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:03.696393 kubelet[2289]: E1101 01:02:03.696375 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:02:07.017835 env[1357]: time="2025-11-01T01:02:07.017792448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:02:07.347650 env[1357]: time="2025-11-01T01:02:07.347547531Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:07.349563 env[1357]: time="2025-11-01T01:02:07.349519310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:02:07.349747 kubelet[2289]: E1101 01:02:07.349713 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:07.350001 kubelet[2289]: E1101 01:02:07.349778 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:07.350199 kubelet[2289]: E1101 01:02:07.350144 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67xqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-647db64f7d-p9wv7_calico-system(a48af7bf-dcba-4afd-bada-a7a0787cc063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:07.351799 kubelet[2289]: E1101 01:02:07.351765 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:02:08.017272 env[1357]: time="2025-11-01T01:02:08.017242884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:08.332885 env[1357]: time="2025-11-01T01:02:08.332797260Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:08.337437 env[1357]: time="2025-11-01T01:02:08.337395250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:08.337595 kubelet[2289]: E1101 01:02:08.337523 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:08.337595 kubelet[2289]: E1101 01:02:08.337553 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:08.337749 kubelet[2289]: E1101 01:02:08.337716 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgbnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7787d665fb-fb8nb_calico-apiserver(9e4c5135-dd57-4916-a0b6-81789ca74a77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:08.338246 env[1357]: time="2025-11-01T01:02:08.338098373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:02:08.339647 kubelet[2289]: E1101 01:02:08.339623 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:02:08.648478 env[1357]: time="2025-11-01T01:02:08.648433723Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:08.648879 env[1357]: time="2025-11-01T01:02:08.648856197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:02:08.649082 kubelet[2289]: E1101 01:02:08.649055 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:08.649275 kubelet[2289]: E1101 01:02:08.649089 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:08.649275 kubelet[2289]: E1101 01:02:08.649194 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mx6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hsw5f_calico-system(cb274633-10f4-4984-be19-e536608b3bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:08.650488 kubelet[2289]: E1101 01:02:08.650467 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:02:09.017040 env[1357]: time="2025-11-01T01:02:09.016931500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:02:09.345296 env[1357]: time="2025-11-01T01:02:09.345213053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:09.346016 env[1357]: time="2025-11-01T01:02:09.345990405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:02:09.346262 kubelet[2289]: E1101 01:02:09.346228 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:09.346347 kubelet[2289]: E1101 01:02:09.346333 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:09.346487 kubelet[2289]: E1101 01:02:09.346462 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:09.348491 env[1357]: time="2025-11-01T01:02:09.348472013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:02:09.662541 env[1357]: time="2025-11-01T01:02:09.662498698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:09.664864 env[1357]: time="2025-11-01T01:02:09.664814835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:02:09.665110 kubelet[2289]: E1101 01:02:09.665080 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:09.665332 kubelet[2289]: E1101 01:02:09.665127 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:09.665332 kubelet[2289]: E1101 01:02:09.665224 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:09.667187 kubelet[2289]: E1101 01:02:09.666774 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:02:09.995216 env[1357]: time="2025-11-01T01:02:09.994930638Z" level=info msg="StopPodSandbox for \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\"" Nov 1 01:02:10.042232 env[1357]: time="2025-11-01T01:02:10.037147202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.028 [WARNING][4713] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--clw4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0", Pod:"csi-node-driver-clw4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliefccec1158f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.028 [INFO][4713] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.028 [INFO][4713] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" iface="eth0" netns="" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.028 [INFO][4713] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.028 [INFO][4713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.048 [INFO][4720] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.048 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.048 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.057 [WARNING][4720] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.058 [INFO][4720] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.058 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:10.062597 env[1357]: 2025-11-01 01:02:10.060 [INFO][4713] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.063031 env[1357]: time="2025-11-01T01:02:10.063009218Z" level=info msg="TearDown network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" successfully" Nov 1 01:02:10.063078 env[1357]: time="2025-11-01T01:02:10.063067290Z" level=info msg="StopPodSandbox for \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" returns successfully" Nov 1 01:02:10.113790 env[1357]: time="2025-11-01T01:02:10.113765145Z" level=info msg="RemovePodSandbox for \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\"" Nov 1 01:02:10.113893 env[1357]: time="2025-11-01T01:02:10.113792286Z" level=info msg="Forcibly stopping sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\"" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.141 [WARNING][4736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--clw4d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be57d3c066db22a9aefe2c4ed3914fd592e8c1dbd9b22359cd2de1fcfe0419d0", Pod:"csi-node-driver-clw4d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliefccec1158f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.142 [INFO][4736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.142 [INFO][4736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" iface="eth0" netns="" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.142 [INFO][4736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.142 [INFO][4736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.157 [INFO][4744] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.157 [INFO][4744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.158 [INFO][4744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.167 [WARNING][4744] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.167 [INFO][4744] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" HandleID="k8s-pod-network.31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Workload="localhost-k8s-csi--node--driver--clw4d-eth0" Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.168 [INFO][4744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:10.171067 env[1357]: 2025-11-01 01:02:10.169 [INFO][4736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5" Nov 1 01:02:10.171503 env[1357]: time="2025-11-01T01:02:10.171481044Z" level=info msg="TearDown network for sandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" successfully" Nov 1 01:02:10.188792 env[1357]: time="2025-11-01T01:02:10.188758309Z" level=info msg="RemovePodSandbox \"31847bfe1e26a430d45fba673108465220f466427ce10525cc6ebd996c954eb5\" returns successfully" Nov 1 01:02:10.206627 env[1357]: time="2025-11-01T01:02:10.206604048Z" level=info msg="StopPodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\"" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.228 [WARNING][4758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0", GenerateName:"calico-apiserver-5c87d7ff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcec211-4cad-40a0-8aa5-e63111d93180", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c87d7ff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6", Pod:"calico-apiserver-5c87d7ff56-95ljj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97f3f941bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.228 [INFO][4758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.228 [INFO][4758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" iface="eth0" netns="" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.228 [INFO][4758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.228 [INFO][4758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.247 [INFO][4765] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.247 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.247 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.251 [WARNING][4765] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.251 [INFO][4765] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.251 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.254258 env[1357]: 2025-11-01 01:02:10.253 [INFO][4758] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.255628 env[1357]: time="2025-11-01T01:02:10.254236156Z" level=info msg="TearDown network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" successfully" Nov 1 01:02:10.255628 env[1357]: time="2025-11-01T01:02:10.255275171Z" level=info msg="StopPodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" returns successfully" Nov 1 01:02:10.255628 env[1357]: time="2025-11-01T01:02:10.255589102Z" level=info msg="RemovePodSandbox for \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\"" Nov 1 01:02:10.255628 env[1357]: time="2025-11-01T01:02:10.255606995Z" level=info msg="Forcibly stopping sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\"" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.277 [WARNING][4779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0", GenerateName:"calico-apiserver-5c87d7ff56-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbcec211-4cad-40a0-8aa5-e63111d93180", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c87d7ff56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dada8b763cc8da5aa944cf471592f1b4e440e2450d286e38554c9100a52f2ac6", Pod:"calico-apiserver-5c87d7ff56-95ljj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97f3f941bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.278 [INFO][4779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.278 [INFO][4779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" iface="eth0" netns="" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.278 [INFO][4779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.278 [INFO][4779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.290 [INFO][4787] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.291 [INFO][4787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.291 [INFO][4787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.294 [WARNING][4787] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.294 [INFO][4787] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" HandleID="k8s-pod-network.0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Workload="localhost-k8s-calico--apiserver--5c87d7ff56--95ljj-eth0" Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.295 [INFO][4787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.297687 env[1357]: 2025-11-01 01:02:10.296 [INFO][4779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54" Nov 1 01:02:10.298077 env[1357]: time="2025-11-01T01:02:10.298054207Z" level=info msg="TearDown network for sandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" successfully" Nov 1 01:02:10.299559 env[1357]: time="2025-11-01T01:02:10.299545983Z" level=info msg="RemovePodSandbox \"0f44848ccbeca3127b519c0e267b6cbc3a4df6d552d05fed271ffc9643835f54\" returns successfully" Nov 1 01:02:10.300048 env[1357]: time="2025-11-01T01:02:10.300020050Z" level=info msg="StopPodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\"" Nov 1 01:02:10.337573 env[1357]: time="2025-11-01T01:02:10.337539814Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:10.337993 env[1357]: time="2025-11-01T01:02:10.337969405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:10.338347 kubelet[2289]: E1101 01:02:10.338190 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:10.338347 kubelet[2289]: E1101 01:02:10.338238 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:10.338424 kubelet[2289]: E1101 01:02:10.338400 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c87d7ff56-95ljj_calico-apiserver(dbcec211-4cad-40a0-8aa5-e63111d93180): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:10.338719 env[1357]: time="2025-11-01T01:02:10.338706409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:10.339647 kubelet[2289]: E1101 01:02:10.339556 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.324 [WARNING][4802] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c9757878-cb1e-46ae-a174-a7b9152136f7", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40", Pod:"coredns-668d6bf9bc-rzp9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40be43ccdf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.325 [INFO][4802] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.325 [INFO][4802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" iface="eth0" netns="" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.325 [INFO][4802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.325 [INFO][4802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.338 [INFO][4809] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.339 [INFO][4809] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.339 [INFO][4809] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.344 [WARNING][4809] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.344 [INFO][4809] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.345 [INFO][4809] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.348293 env[1357]: 2025-11-01 01:02:10.347 [INFO][4802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.350598 env[1357]: time="2025-11-01T01:02:10.348721932Z" level=info msg="TearDown network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" successfully" Nov 1 01:02:10.350598 env[1357]: time="2025-11-01T01:02:10.348742380Z" level=info msg="StopPodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" returns successfully" Nov 1 01:02:10.350598 env[1357]: time="2025-11-01T01:02:10.349183546Z" level=info msg="RemovePodSandbox for \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\"" Nov 1 01:02:10.350598 env[1357]: time="2025-11-01T01:02:10.349200711Z" level=info msg="Forcibly stopping sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\"" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.375 [WARNING][4824] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c9757878-cb1e-46ae-a174-a7b9152136f7", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70e53d85c4650d6cb1105db5c4c0b8a271f8e52e13c32cfa27868f4f1a455f40", Pod:"coredns-668d6bf9bc-rzp9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40be43ccdf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.375 [INFO][4824] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.375 [INFO][4824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" iface="eth0" netns="" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.375 [INFO][4824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.375 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.389 [INFO][4831] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.389 [INFO][4831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.389 [INFO][4831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.393 [WARNING][4831] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.393 [INFO][4831] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" HandleID="k8s-pod-network.9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Workload="localhost-k8s-coredns--668d6bf9bc--rzp9k-eth0" Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.393 [INFO][4831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.395988 env[1357]: 2025-11-01 01:02:10.394 [INFO][4824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442" Nov 1 01:02:10.396790 env[1357]: time="2025-11-01T01:02:10.396767817Z" level=info msg="TearDown network for sandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" successfully" Nov 1 01:02:10.398312 env[1357]: time="2025-11-01T01:02:10.398299670Z" level=info msg="RemovePodSandbox \"9f726969058ef8f2e663650fe2b27dda62381218a5d5957de5522097806b0442\" returns successfully" Nov 1 01:02:10.398681 env[1357]: time="2025-11-01T01:02:10.398646218Z" level=info msg="StopPodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\"" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.420 [WARNING][4845] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e4c5135-dd57-4916-a0b6-81789ca74a77", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2", Pod:"calico-apiserver-7787d665fb-fb8nb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadad37b4f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.420 [INFO][4845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.420 [INFO][4845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" iface="eth0" netns="" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.420 [INFO][4845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.420 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.440 [INFO][4852] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.440 [INFO][4852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.440 [INFO][4852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.444 [WARNING][4852] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.444 [INFO][4852] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.445 [INFO][4852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.448118 env[1357]: 2025-11-01 01:02:10.446 [INFO][4845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.448578 env[1357]: time="2025-11-01T01:02:10.448557714Z" level=info msg="TearDown network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" successfully" Nov 1 01:02:10.448933 env[1357]: time="2025-11-01T01:02:10.448669048Z" level=info msg="StopPodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" returns successfully" Nov 1 01:02:10.449042 env[1357]: time="2025-11-01T01:02:10.449031226Z" level=info msg="RemovePodSandbox for \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\"" Nov 1 01:02:10.449120 env[1357]: time="2025-11-01T01:02:10.449096018Z" level=info msg="Forcibly stopping sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\"" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.470 [WARNING][4866] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e4c5135-dd57-4916-a0b6-81789ca74a77", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"318ef8caeef0d2b0d47f17e468d4a037d915d7ce126c00b963aeb315540bc4c2", Pod:"calico-apiserver-7787d665fb-fb8nb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadad37b4f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.470 [INFO][4866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.470 [INFO][4866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" iface="eth0" netns="" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.470 [INFO][4866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.470 [INFO][4866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.484 [INFO][4873] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.485 [INFO][4873] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.485 [INFO][4873] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.490 [WARNING][4873] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.490 [INFO][4873] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" HandleID="k8s-pod-network.963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Workload="localhost-k8s-calico--apiserver--7787d665fb--fb8nb-eth0" Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.492 [INFO][4873] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.494460 env[1357]: 2025-11-01 01:02:10.493 [INFO][4866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5" Nov 1 01:02:10.494947 env[1357]: time="2025-11-01T01:02:10.494468598Z" level=info msg="TearDown network for sandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" successfully" Nov 1 01:02:10.495891 env[1357]: time="2025-11-01T01:02:10.495873782Z" level=info msg="RemovePodSandbox \"963e8a55b26f297f25ed2a9455e2a5c1bc4b6edfd1660a770d38a7ffea576ff5\" returns successfully" Nov 1 01:02:10.496216 env[1357]: time="2025-11-01T01:02:10.496202121Z" level=info msg="StopPodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\"" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.517 [WARNING][4888] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"27248a54-b567-4865-8268-2eb8267aa120", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c", Pod:"calico-apiserver-7787d665fb-xxt5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali598f3cd8adc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.517 [INFO][4888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.517 [INFO][4888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" iface="eth0" netns="" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.517 [INFO][4888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.517 [INFO][4888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.533 [INFO][4896] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.533 [INFO][4896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.533 [INFO][4896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.537 [WARNING][4896] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.537 [INFO][4896] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.538 [INFO][4896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.540640 env[1357]: 2025-11-01 01:02:10.539 [INFO][4888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.541555 env[1357]: time="2025-11-01T01:02:10.541010194Z" level=info msg="TearDown network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" successfully" Nov 1 01:02:10.541555 env[1357]: time="2025-11-01T01:02:10.541042016Z" level=info msg="StopPodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" returns successfully" Nov 1 01:02:10.543346 env[1357]: time="2025-11-01T01:02:10.543325279Z" level=info msg="RemovePodSandbox for \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\"" Nov 1 01:02:10.543391 env[1357]: time="2025-11-01T01:02:10.543349020Z" level=info msg="Forcibly stopping sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\"" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.565 [WARNING][4911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0", GenerateName:"calico-apiserver-7787d665fb-", Namespace:"calico-apiserver", SelfLink:"", UID:"27248a54-b567-4865-8268-2eb8267aa120", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7787d665fb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78672cbdd6e46872c7070107312b762ba099b2584ff73b8bcca802e6825f511c", Pod:"calico-apiserver-7787d665fb-xxt5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali598f3cd8adc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.565 [INFO][4911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.565 [INFO][4911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" iface="eth0" netns="" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.565 [INFO][4911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.565 [INFO][4911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.579 [INFO][4918] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.579 [INFO][4918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.579 [INFO][4918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.586 [WARNING][4918] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.586 [INFO][4918] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" HandleID="k8s-pod-network.33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Workload="localhost-k8s-calico--apiserver--7787d665fb--xxt5l-eth0" Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.587 [INFO][4918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.589932 env[1357]: 2025-11-01 01:02:10.588 [INFO][4911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13" Nov 1 01:02:10.590289 env[1357]: time="2025-11-01T01:02:10.589951643Z" level=info msg="TearDown network for sandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" successfully" Nov 1 01:02:10.601414 env[1357]: time="2025-11-01T01:02:10.601381260Z" level=info msg="RemovePodSandbox \"33ae4ebef7b540364464ead42411a96d422840754565ac7e28881067ba4acc13\" returns successfully" Nov 1 01:02:10.623541 env[1357]: time="2025-11-01T01:02:10.623518414Z" level=info msg="StopPodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\"" Nov 1 01:02:10.651342 env[1357]: time="2025-11-01T01:02:10.651308701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:10.651697 env[1357]: time="2025-11-01T01:02:10.651659684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:10.652464 kubelet[2289]: E1101 01:02:10.651813 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:10.652464 kubelet[2289]: E1101 01:02:10.651846 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:10.652464 kubelet[2289]: E1101 01:02:10.651922 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6njt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7787d665fb-xxt5l_calico-apiserver(27248a54-b567-4865-8268-2eb8267aa120): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:10.654243 kubelet[2289]: E1101 01:02:10.653917 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.646 [WARNING][4932] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8387fb77-dd73-4b48-9e3f-a6209aeef170", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156", Pod:"coredns-668d6bf9bc-wzrxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7369b63cde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.646 [INFO][4932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.646 [INFO][4932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" iface="eth0" netns="" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.646 [INFO][4932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.646 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.667 [INFO][4939] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.667 [INFO][4939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.667 [INFO][4939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.670 [WARNING][4939] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.671 [INFO][4939] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.671 [INFO][4939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:10.674309 env[1357]: 2025-11-01 01:02:10.673 [INFO][4932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.675763 env[1357]: time="2025-11-01T01:02:10.674708335Z" level=info msg="TearDown network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" successfully" Nov 1 01:02:10.675763 env[1357]: time="2025-11-01T01:02:10.674730178Z" level=info msg="StopPodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" returns successfully" Nov 1 01:02:10.675763 env[1357]: time="2025-11-01T01:02:10.674904642Z" level=info msg="RemovePodSandbox for \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\"" Nov 1 01:02:10.675763 env[1357]: time="2025-11-01T01:02:10.674920301Z" level=info msg="Forcibly stopping sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\"" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.698 [WARNING][4953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8387fb77-dd73-4b48-9e3f-a6209aeef170", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"907aa702979e91979c9a388ad6f02f1b1c9e519f21dfd76f44ffe1a8795c2156", Pod:"coredns-668d6bf9bc-wzrxr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7369b63cde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.698 [INFO][4953] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.698 [INFO][4953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" iface="eth0" netns="" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.698 [INFO][4953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.698 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.719 [INFO][4961] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.719 [INFO][4961] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.719 [INFO][4961] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.723 [WARNING][4961] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.723 [INFO][4961] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" HandleID="k8s-pod-network.fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Workload="localhost-k8s-coredns--668d6bf9bc--wzrxr-eth0" Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.724 [INFO][4961] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.727009 env[1357]: 2025-11-01 01:02:10.725 [INFO][4953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b" Nov 1 01:02:10.727402 env[1357]: time="2025-11-01T01:02:10.727380455Z" level=info msg="TearDown network for sandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" successfully" Nov 1 01:02:10.728998 env[1357]: time="2025-11-01T01:02:10.728984256Z" level=info msg="RemovePodSandbox \"fed4f3f1e9d521ee583ade003d4d117acd6da61bd471431ec09366c2c693dc1b\" returns successfully" Nov 1 01:02:10.729389 env[1357]: time="2025-11-01T01:02:10.729371296Z" level=info msg="StopPodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\"" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.757 [WARNING][4975] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0", GenerateName:"calico-kube-controllers-647db64f7d-", Namespace:"calico-system", SelfLink:"", UID:"a48af7bf-dcba-4afd-bada-a7a0787cc063", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"647db64f7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56", Pod:"calico-kube-controllers-647db64f7d-p9wv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e0739d5afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.757 [INFO][4975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.757 [INFO][4975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" iface="eth0" netns="" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.757 [INFO][4975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.757 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.783 [INFO][4982] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.783 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.783 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.787 [WARNING][4982] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.787 [INFO][4982] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.788 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.791090 env[1357]: 2025-11-01 01:02:10.789 [INFO][4975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.793107 env[1357]: time="2025-11-01T01:02:10.791066287Z" level=info msg="TearDown network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" successfully" Nov 1 01:02:10.793143 env[1357]: time="2025-11-01T01:02:10.793107510Z" level=info msg="StopPodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" returns successfully" Nov 1 01:02:10.797450 env[1357]: time="2025-11-01T01:02:10.797426643Z" level=info msg="RemovePodSandbox for \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\"" Nov 1 01:02:10.797523 env[1357]: time="2025-11-01T01:02:10.797451673Z" level=info msg="Forcibly stopping sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\"" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.831 [WARNING][4997] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0", GenerateName:"calico-kube-controllers-647db64f7d-", Namespace:"calico-system", SelfLink:"", UID:"a48af7bf-dcba-4afd-bada-a7a0787cc063", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"647db64f7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ff3a8a7896b458686ff020f251fcf5dd16baa30f0c16b87b262da918bc58d56", Pod:"calico-kube-controllers-647db64f7d-p9wv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9e0739d5afb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.832 [INFO][4997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.832 [INFO][4997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" iface="eth0" netns="" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.832 [INFO][4997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.832 [INFO][4997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.845 [INFO][5004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.845 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.845 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.848 [WARNING][5004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.848 [INFO][5004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" HandleID="k8s-pod-network.31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Workload="localhost-k8s-calico--kube--controllers--647db64f7d--p9wv7-eth0" Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.849 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.851680 env[1357]: 2025-11-01 01:02:10.850 [INFO][4997] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1" Nov 1 01:02:10.852066 env[1357]: time="2025-11-01T01:02:10.851703909Z" level=info msg="TearDown network for sandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" successfully" Nov 1 01:02:10.853183 env[1357]: time="2025-11-01T01:02:10.853166838Z" level=info msg="RemovePodSandbox \"31276cef7e87a461939e11d45dc633439d500732f1e50e32643ef0ddbcb40df1\" returns successfully" Nov 1 01:02:10.853563 env[1357]: time="2025-11-01T01:02:10.853513697Z" level=info msg="StopPodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\"" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.873 [WARNING][5018] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hsw5f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cb274633-10f4-4984-be19-e536608b3bf1", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c", Pod:"goldmane-666569f655-hsw5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5630d412530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.874 [INFO][5018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.874 [INFO][5018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" iface="eth0" netns="" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.874 [INFO][5018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.874 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.887 [INFO][5025] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.887 [INFO][5025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.887 [INFO][5025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.891 [WARNING][5025] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.891 [INFO][5025] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.892 [INFO][5025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:10.894624 env[1357]: 2025-11-01 01:02:10.893 [INFO][5018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.895007 env[1357]: time="2025-11-01T01:02:10.894645571Z" level=info msg="TearDown network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" successfully" Nov 1 01:02:10.895007 env[1357]: time="2025-11-01T01:02:10.894683517Z" level=info msg="StopPodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" returns successfully" Nov 1 01:02:10.897510 env[1357]: time="2025-11-01T01:02:10.897493824Z" level=info msg="RemovePodSandbox for \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\"" Nov 1 01:02:10.897711 env[1357]: time="2025-11-01T01:02:10.897683685Z" level=info msg="Forcibly stopping sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\"" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.923 [WARNING][5040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hsw5f-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"cb274633-10f4-4984-be19-e536608b3bf1", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7584e08ceda31396cbd1e1f19981d2e703b451b91978ef00ef02b7129fd71f9c", Pod:"goldmane-666569f655-hsw5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5630d412530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.923 [INFO][5040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.923 [INFO][5040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" iface="eth0" netns="" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.923 [INFO][5040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.923 [INFO][5040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.936 [INFO][5047] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.936 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.936 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.940 [WARNING][5047] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.940 [INFO][5047] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" HandleID="k8s-pod-network.b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Workload="localhost-k8s-goldmane--666569f655--hsw5f-eth0" Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.941 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:10.943696 env[1357]: 2025-11-01 01:02:10.942 [INFO][5040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7" Nov 1 01:02:10.944037 env[1357]: time="2025-11-01T01:02:10.943715596Z" level=info msg="TearDown network for sandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" successfully" Nov 1 01:02:10.962536 env[1357]: time="2025-11-01T01:02:10.962486316Z" level=info msg="RemovePodSandbox \"b12d86bfd044332a93a763b4593abffd6bff3a0520d093a99f14b1dc632feef7\" returns successfully" Nov 1 01:02:10.962899 env[1357]: time="2025-11-01T01:02:10.962879645Z" level=info msg="StopPodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\"" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:10.984 [WARNING][5061] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" WorkloadEndpoint="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:10.984 [INFO][5061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:10.984 [INFO][5061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" iface="eth0" netns="" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:10.984 [INFO][5061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:10.984 [INFO][5061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.003 [INFO][5068] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.003 [INFO][5068] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.003 [INFO][5068] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.007 [WARNING][5068] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.007 [INFO][5068] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.008 [INFO][5068] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:11.011596 env[1357]: 2025-11-01 01:02:11.009 [INFO][5061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.015934 env[1357]: time="2025-11-01T01:02:11.012132584Z" level=info msg="TearDown network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" successfully" Nov 1 01:02:11.015934 env[1357]: time="2025-11-01T01:02:11.012161828Z" level=info msg="StopPodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" returns successfully" Nov 1 01:02:11.015934 env[1357]: time="2025-11-01T01:02:11.012556306Z" level=info msg="RemovePodSandbox for \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\"" Nov 1 01:02:11.015934 env[1357]: time="2025-11-01T01:02:11.012590413Z" level=info msg="Forcibly stopping sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\"" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.035 [WARNING][5082] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" WorkloadEndpoint="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.035 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.035 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" iface="eth0" netns="" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.035 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.035 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.054 [INFO][5089] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.054 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.054 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.058 [WARNING][5089] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.058 [INFO][5089] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" HandleID="k8s-pod-network.cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Workload="localhost-k8s-whisker--9467669d4--jjpzl-eth0" Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.059 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:11.061435 env[1357]: 2025-11-01 01:02:11.060 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf" Nov 1 01:02:11.063178 env[1357]: time="2025-11-01T01:02:11.061792287Z" level=info msg="TearDown network for sandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" successfully" Nov 1 01:02:11.063804 env[1357]: time="2025-11-01T01:02:11.063789986Z" level=info msg="RemovePodSandbox \"cb109768aeefa75de5b45b846d9511a268159f390cbfc2b97f2bb59f0618f4bf\" returns successfully" Nov 1 01:02:16.020084 kubelet[2289]: E1101 01:02:16.020051 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:02:17.527027 systemd[1]: run-containerd-runc-k8s.io-a452617c1eafd1ae76c1a80d68b621eb5658aea9e6f3c56486c9c71db5808985-runc.eKFuQQ.mount: Deactivated successfully. 
Nov 1 01:02:19.031124 kubelet[2289]: E1101 01:02:19.031075 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:02:19.035787 kubelet[2289]: E1101 01:02:19.035750 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:02:20.017392 kubelet[2289]: E1101 01:02:20.017369 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:02:21.047579 kubelet[2289]: E1101 01:02:21.047551 2289 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:02:23.016573 kubelet[2289]: E1101 01:02:23.016542 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:02:24.017437 kubelet[2289]: E1101 01:02:24.017411 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:02:27.165154 env[1357]: time="2025-11-01T01:02:27.165120427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:02:27.491435 env[1357]: time="2025-11-01T01:02:27.491343481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:27.499316 env[1357]: time="2025-11-01T01:02:27.499272022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:02:27.512255 kubelet[2289]: E1101 01:02:27.512203 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:27.538246 kubelet[2289]: E1101 01:02:27.538206 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:27.559389 kubelet[2289]: E1101 01:02:27.559343 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24a2a9b1916e445d94a05dba571afa1c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:27.561297 env[1357]: time="2025-11-01T01:02:27.561270602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:02:27.866294 
env[1357]: time="2025-11-01T01:02:27.866218292Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:28.245817 env[1357]: time="2025-11-01T01:02:28.245773412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:02:28.246239 kubelet[2289]: E1101 01:02:28.246217 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:28.246318 kubelet[2289]: E1101 01:02:28.246302 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:28.254903 kubelet[2289]: E1101 01:02:28.246515 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:28.254903 kubelet[2289]: E1101 01:02:28.247600 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:02:31.017942 env[1357]: time="2025-11-01T01:02:31.017714219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:02:31.319982 env[1357]: time="2025-11-01T01:02:31.319881976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:31.321306 env[1357]: time="2025-11-01T01:02:31.321272752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:02:31.321486 kubelet[2289]: E1101 01:02:31.321464 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:31.323520 kubelet[2289]: E1101 01:02:31.321905 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:31.323520 kubelet[2289]: E1101 01:02:31.322009 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mx6f,R
eadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hsw5f_calico-system(cb274633-10f4-4984-be19-e536608b3bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:31.323520 kubelet[2289]: E1101 01:02:31.323334 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:02:32.017569 env[1357]: time="2025-11-01T01:02:32.017542163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:32.333515 env[1357]: time="2025-11-01T01:02:32.333421263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:32.334034 env[1357]: time="2025-11-01T01:02:32.333920631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:32.334296 kubelet[2289]: E1101 01:02:32.334086 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:32.334296 kubelet[2289]: E1101 01:02:32.334124 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:32.334296 kubelet[2289]: E1101 01:02:32.334217 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgbnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7787d665fb-fb8nb_calico-apiserver(9e4c5135-dd57-4916-a0b6-81789ca74a77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:32.335719 kubelet[2289]: E1101 01:02:32.335516 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:02:33.016618 env[1357]: time="2025-11-01T01:02:33.016591330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:02:33.328369 env[1357]: time="2025-11-01T01:02:33.328275969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:33.343111 env[1357]: time="2025-11-01T01:02:33.343033559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:02:33.347027 kubelet[2289]: E1101 01:02:33.343310 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:33.347027 kubelet[2289]: E1101 01:02:33.343363 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:33.347027 kubelet[2289]: E1101 01:02:33.343477 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67xqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&
ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-647db64f7d-p9wv7_calico-system(a48af7bf-dcba-4afd-bada-a7a0787cc063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:33.347027 kubelet[2289]: E1101 01:02:33.344594 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:02:34.018127 env[1357]: time="2025-11-01T01:02:34.018098569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:34.356267 env[1357]: time="2025-11-01T01:02:34.356169926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:34.356717 env[1357]: time="2025-11-01T01:02:34.356661027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:34.356930 kubelet[2289]: E1101 01:02:34.356897 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:34.357149 kubelet[2289]: E1101 01:02:34.356936 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:34.357149 kubelet[2289]: E1101 01:02:34.357015 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c87d7ff56-95ljj_calico-apiserver(dbcec211-4cad-40a0-8aa5-e63111d93180): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:34.358344 kubelet[2289]: E1101 01:02:34.358313 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:02:35.017437 env[1357]: time="2025-11-01T01:02:35.017185884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:35.239172 systemd[1]: Started sshd@7-139.178.70.108:22-147.75.109.163:52128.service. Nov 1 01:02:35.244748 kernel: kauditd_printk_skb: 35 callbacks suppressed Nov 1 01:02:35.249135 kernel: audit: type=1130 audit(1761958955.239:436): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.108:22-147.75.109.163:52128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:35.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.108:22-147.75.109.163:52128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:02:35.372332 env[1357]: time="2025-11-01T01:02:35.372199736Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:35.449072 env[1357]: time="2025-11-01T01:02:35.449019717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:35.449302 kubelet[2289]: E1101 01:02:35.449264 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:35.449514 kubelet[2289]: E1101 01:02:35.449311 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:35.449541 kubelet[2289]: E1101 01:02:35.449494 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6njt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7787d665fb-xxt5l_calico-apiserver(27248a54-b567-4865-8268-2eb8267aa120): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:35.449833 env[1357]: time="2025-11-01T01:02:35.449820915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:02:35.451166 kubelet[2289]: E1101 01:02:35.451150 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:02:35.714000 audit[5132]: USER_ACCT pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:35.717095 sshd[5132]: Accepted publickey for core from 147.75.109.163 port 52128 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:02:35.718682 kernel: audit: type=1101 audit(1761958955.714:437): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:35.719000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 
01:02:35.721803 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:35.723723 kernel: audit: type=1103 audit(1761958955.719:438): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:35.725698 kernel: audit: type=1006 audit(1761958955.719:439): pid=5132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 01:02:35.719000 audit[5132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4e58b2e0 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:35.795554 kernel: audit: type=1300 audit(1761958955.719:439): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4e58b2e0 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:35.795618 kernel: audit: type=1327 audit(1761958955.719:439): proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:35.719000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:35.795698 env[1357]: time="2025-11-01T01:02:35.778254267Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:35.833441 env[1357]: time="2025-11-01T01:02:35.803273946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 
01:02:35.833441 env[1357]: time="2025-11-01T01:02:35.805289687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:02:35.834948 kubelet[2289]: E1101 01:02:35.803454 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:35.834948 kubelet[2289]: E1101 01:02:35.803488 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:35.834948 kubelet[2289]: E1101 01:02:35.803566 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:35.893965 systemd[1]: Started session-10.scope. Nov 1 01:02:35.894615 systemd-logind[1341]: New session 10 of user core. Nov 1 01:02:35.897000 audit[5132]: USER_START pid=5132 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:35.901697 kernel: audit: type=1105 audit(1761958955.897:440): pid=5132 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:35.901000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:35.905943 kernel: audit: type=1103 audit(1761958955.901:441): pid=5135 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:36.153588 env[1357]: time="2025-11-01T01:02:36.153466126Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:36.153934 env[1357]: time="2025-11-01T01:02:36.153857462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:02:36.154106 kubelet[2289]: E1101 01:02:36.154072 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:36.154209 kubelet[2289]: E1101 01:02:36.154194 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:36.154404 kubelet[2289]: E1101 01:02:36.154382 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:36.155608 kubelet[2289]: E1101 01:02:36.155589 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:02:36.882609 sshd[5132]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:36.883000 audit[5132]: USER_END pid=5132 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:36.887685 kernel: audit: type=1106 audit(1761958956.883:442): pid=5132 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:36.887821 systemd[1]: sshd@7-139.178.70.108:22-147.75.109.163:52128.service: Deactivated successfully. 
Nov 1 01:02:36.888796 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 01:02:36.889020 systemd-logind[1341]: Session 10 logged out. Waiting for processes to exit. Nov 1 01:02:36.883000 audit[5132]: CRED_DISP pid=5132 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:36.893683 kernel: audit: type=1104 audit(1761958956.883:443): pid=5132 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:36.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.108:22-147.75.109.163:52128 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:36.893972 systemd-logind[1341]: Removed session 10. 
Nov 1 01:02:39.139688 kubelet[2289]: E1101 01:02:39.139646 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:02:41.885858 systemd[1]: Started sshd@8-139.178.70.108:22-147.75.109.163:60070.service. Nov 1 01:02:41.887882 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:02:41.887914 kernel: audit: type=1130 audit(1761958961.885:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.108:22-147.75.109.163:60070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:41.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.108:22-147.75.109.163:60070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:02:42.110000 audit[5148]: USER_ACCT pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.110942 sshd[5148]: Accepted publickey for core from 147.75.109.163 port 60070 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:02:42.117956 kernel: audit: type=1101 audit(1761958962.110:446): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.117990 kernel: audit: type=1103 audit(1761958962.114:447): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.118008 kernel: audit: type=1006 audit(1761958962.114:448): pid=5148 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Nov 1 01:02:42.114000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.129327 kernel: audit: type=1300 audit(1761958962.114:448): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff28c09590 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:42.129366 kernel: audit: type=1327 
audit(1761958962.114:448): proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:42.114000 audit[5148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff28c09590 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:42.114000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:42.125134 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:42.155806 systemd[1]: Started session-11.scope. Nov 1 01:02:42.156437 systemd-logind[1341]: New session 11 of user core. Nov 1 01:02:42.163956 kernel: audit: type=1105 audit(1761958962.158:449): pid=5148 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.158000 audit[5148]: USER_START pid=5148 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.159000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.167692 kernel: audit: type=1103 audit(1761958962.159:450): pid=5151 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 
01:02:42.470921 sshd[5148]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:42.471000 audit[5148]: USER_END pid=5148 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.472950 systemd[1]: sshd@8-139.178.70.108:22-147.75.109.163:60070.service: Deactivated successfully. Nov 1 01:02:42.473519 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 01:02:42.471000 audit[5148]: CRED_DISP pid=5148 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.476127 systemd-logind[1341]: Session 11 logged out. Waiting for processes to exit. Nov 1 01:02:42.479473 kernel: audit: type=1106 audit(1761958962.471:451): pid=5148 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.479532 kernel: audit: type=1104 audit(1761958962.471:452): pid=5148 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:42.479989 systemd-logind[1341]: Removed session 11. Nov 1 01:02:42.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.108:22-147.75.109.163:60070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:02:43.017068 kubelet[2289]: E1101 01:02:43.017033 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:02:46.018564 kubelet[2289]: E1101 01:02:46.018543 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:02:47.017154 kubelet[2289]: E1101 01:02:47.017126 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:02:47.475857 systemd[1]: Started 
sshd@9-139.178.70.108:22-147.75.109.163:60072.service. Nov 1 01:02:47.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.108:22-147.75.109.163:60072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:47.477431 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:02:47.477470 kernel: audit: type=1130 audit(1761958967.474:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.108:22-147.75.109.163:60072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:47.621000 audit[5165]: USER_ACCT pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.623489 sshd[5165]: Accepted publickey for core from 147.75.109.163 port 60072 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:02:47.635421 kernel: audit: type=1101 audit(1761958967.621:455): pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.635461 kernel: audit: type=1103 audit(1761958967.621:456): pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.635496 kernel: audit: type=1006 audit(1761958967.621:457): pid=5165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 
Nov 1 01:02:47.635525 kernel: audit: type=1300 audit(1761958967.621:457): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf3d48c90 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:47.621000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.621000 audit[5165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf3d48c90 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:47.624043 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:47.641348 kernel: audit: type=1327 audit(1761958967.621:457): proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:47.641374 kernel: audit: type=1105 audit(1761958967.639:458): pid=5165 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.621000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:47.639000 audit[5165]: USER_START pid=5165 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.637418 systemd[1]: Started 
session-12.scope. Nov 1 01:02:47.638049 systemd-logind[1341]: New session 12 of user core. Nov 1 01:02:47.641000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:47.652697 kernel: audit: type=1103 audit(1761958967.641:459): pid=5184 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.006340 systemd[1]: Started sshd@10-139.178.70.108:22-147.75.109.163:60086.service. Nov 1 01:02:48.017839 kernel: audit: type=1130 audit(1761958968.004:460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.108:22-147.75.109.163:60086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:48.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.108:22-147.75.109.163:60086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:02:48.028058 kubelet[2289]: E1101 01:02:48.028037 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:02:48.036807 sshd[5165]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:48.045216 kubelet[2289]: E1101 01:02:48.045184 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:02:48.070000 audit[5165]: USER_END pid=5165 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.077688 kernel: audit: type=1106 audit(1761958968.070:461): pid=5165 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.081000 audit[5165]: CRED_DISP pid=5165 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.084581 systemd[1]: sshd@9-139.178.70.108:22-147.75.109.163:60072.service: Deactivated successfully. Nov 1 01:02:48.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.108:22-147.75.109.163:60072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:48.085646 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 01:02:48.085763 systemd-logind[1341]: Session 12 logged out. Waiting for processes to exit. Nov 1 01:02:48.086562 systemd-logind[1341]: Removed session 12. 
Nov 1 01:02:48.252000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.253000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.253000 audit[5197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd2c93540 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:48.253000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:48.280371 sshd[5197]: Accepted publickey for core from 147.75.109.163 port 60086 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:02:48.277710 sshd[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:48.289000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.290000 audit[5202]: CRED_ACQ pid=5202 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.287618 systemd-logind[1341]: New session 13 of user core. Nov 1 01:02:48.287978 systemd[1]: Started session-13.scope. 
Nov 1 01:02:48.543910 sshd[5197]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:48.543000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.544000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.108:22-147.75.109.163:60100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:48.546590 systemd[1]: Started sshd@11-139.178.70.108:22-147.75.109.163:60100.service. Nov 1 01:02:48.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.108:22-147.75.109.163:60086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:48.550310 systemd[1]: sshd@10-139.178.70.108:22-147.75.109.163:60086.service: Deactivated successfully. Nov 1 01:02:48.552066 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 01:02:48.552277 systemd-logind[1341]: Session 13 logged out. Waiting for processes to exit. Nov 1 01:02:48.553586 systemd-logind[1341]: Removed session 13. 
Nov 1 01:02:48.610000 audit[5207]: USER_ACCT pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.612532 sshd[5207]: Accepted publickey for core from 147.75.109.163 port 60100 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:02:48.611000 audit[5207]: CRED_ACQ pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.611000 audit[5207]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff382ef390 a2=3 a3=0 items=0 ppid=1 pid=5207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:48.611000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:48.613875 sshd[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:48.618000 audit[5207]: USER_START pid=5207 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.619000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:48.616700 systemd-logind[1341]: New session 14 of user core. Nov 1 01:02:48.617074 systemd[1]: Started session-14.scope. 
Nov 1 01:02:49.320380 sshd[5207]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:49.354000 audit[5207]: USER_END pid=5207 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:49.354000 audit[5207]: CRED_DISP pid=5207 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:49.414539 systemd[1]: sshd@11-139.178.70.108:22-147.75.109.163:60100.service: Deactivated successfully. Nov 1 01:02:49.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.108:22-147.75.109.163:60100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:49.422479 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 01:02:49.422521 systemd-logind[1341]: Session 14 logged out. Waiting for processes to exit. Nov 1 01:02:49.423152 systemd-logind[1341]: Removed session 14. 
Nov 1 01:02:50.211912 kubelet[2289]: E1101 01:02:50.211874 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:02:54.018853 kubelet[2289]: E1101 01:02:54.018818 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:02:54.029579 kubelet[2289]: E1101 01:02:54.029556 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:02:54.287934 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 01:02:54.288006 kernel: audit: type=1130 audit(1761958974.283:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.108:22-147.75.109.163:37806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:54.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.108:22-147.75.109.163:37806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:54.284869 systemd[1]: Started sshd@12-139.178.70.108:22-147.75.109.163:37806.service. 
Nov 1 01:02:54.473000 audit[5225]: USER_ACCT pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.474972 sshd[5225]: Accepted publickey for core from 147.75.109.163 port 37806 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:02:54.486379 kernel: audit: type=1101 audit(1761958974.473:482): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.484000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.489700 kernel: audit: type=1103 audit(1761958974.484:483): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.500721 kernel: audit: type=1006 audit(1761958974.485:484): pid=5225 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 01:02:54.500760 kernel: audit: type=1300 audit(1761958974.485:484): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc02839190 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:54.507645 kernel: audit: type=1327 
audit(1761958974.485:484): proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:54.485000 audit[5225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc02839190 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:02:54.485000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:02:54.526822 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:02:54.582587 systemd[1]: Started session-15.scope. Nov 1 01:02:54.594515 kernel: audit: type=1105 audit(1761958974.584:485): pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.594570 kernel: audit: type=1103 audit(1761958974.585:486): pid=5228 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.584000 audit[5225]: USER_START pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.585000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:54.583371 systemd-logind[1341]: New session 15 of user core. 
Nov 1 01:02:55.149240 sshd[5225]: pam_unix(sshd:session): session closed for user core Nov 1 01:02:55.148000 audit[5225]: USER_END pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:55.154680 kernel: audit: type=1106 audit(1761958975.148:487): pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:55.153000 audit[5225]: CRED_DISP pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:55.158684 kernel: audit: type=1104 audit(1761958975.153:488): pid=5225 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:02:55.159365 systemd-logind[1341]: Session 15 logged out. Waiting for processes to exit. Nov 1 01:02:55.159443 systemd[1]: sshd@12-139.178.70.108:22-147.75.109.163:37806.service: Deactivated successfully. Nov 1 01:02:55.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.108:22-147.75.109.163:37806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:02:55.159976 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 01:02:55.160381 systemd-logind[1341]: Removed session 15. 
Nov 1 01:03:00.017496 kubelet[2289]: E1101 01:03:00.017473 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:03:00.152089 systemd[1]: Started sshd@13-139.178.70.108:22-147.75.109.163:52986.service. Nov 1 01:03:00.156490 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:03:00.156530 kernel: audit: type=1130 audit(1761958980.150:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.108:22-147.75.109.163:52986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:00.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.108:22-147.75.109.163:52986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:03:00.199000 audit[5239]: USER_ACCT pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.204813 kernel: audit: type=1101 audit(1761958980.199:491): pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.204845 sshd[5239]: Accepted publickey for core from 147.75.109.163 port 52986 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:00.204000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.206132 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:00.211233 kernel: audit: type=1103 audit(1761958980.204:492): pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.211273 kernel: audit: type=1006 audit(1761958980.204:493): pid=5239 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 01:03:00.214810 kernel: audit: type=1300 audit(1761958980.204:493): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbb739410 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:00.204000 audit[5239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbb739410 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:00.216929 systemd[1]: Started session-16.scope. Nov 1 01:03:00.217062 systemd-logind[1341]: New session 16 of user core. Nov 1 01:03:00.204000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:00.222676 kernel: audit: type=1327 audit(1761958980.204:493): proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:00.218000 audit[5239]: USER_START pid=5239 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.230078 kernel: audit: type=1105 audit(1761958980.218:494): pid=5239 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.230692 kernel: audit: type=1103 audit(1761958980.220:495): pid=5242 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.220000 audit[5242]: CRED_ACQ pid=5242 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 
01:03:00.370470 sshd[5239]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:00.372000 audit[5239]: USER_END pid=5239 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.379176 kernel: audit: type=1106 audit(1761958980.372:496): pid=5239 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.378628 systemd[1]: sshd@13-139.178.70.108:22-147.75.109.163:52986.service: Deactivated successfully. Nov 1 01:03:00.379459 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 01:03:00.379643 systemd-logind[1341]: Session 16 logged out. Waiting for processes to exit. Nov 1 01:03:00.372000 audit[5239]: CRED_DISP pid=5239 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.108:22-147.75.109.163:52986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:00.383919 kernel: audit: type=1104 audit(1761958980.372:497): pid=5239 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:00.383335 systemd-logind[1341]: Removed session 16. 
Nov 1 01:03:01.017442 kubelet[2289]: E1101 01:03:01.017415 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:03:01.019518 kubelet[2289]: E1101 01:03:01.019494 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:03:02.017126 kubelet[2289]: E1101 01:03:02.017100 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:03:03.017370 kubelet[2289]: E1101 
01:03:03.017328 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:03:05.017043 kubelet[2289]: E1101 01:03:05.017013 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:03:05.370210 systemd[1]: Started sshd@14-139.178.70.108:22-147.75.109.163:52996.service. Nov 1 01:03:05.373017 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:03:05.373093 kernel: audit: type=1130 audit(1761958985.369:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.108:22-147.75.109.163:52996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:05.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.108:22-147.75.109.163:52996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:05.448000 audit[5251]: USER_ACCT pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.453717 kernel: audit: type=1101 audit(1761958985.448:500): pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.454645 sshd[5251]: Accepted publickey for core from 147.75.109.163 port 52996 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:05.455000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.460715 kernel: audit: type=1103 audit(1761958985.455:501): pid=5251 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.461536 sshd[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:05.463685 kernel: audit: type=1006 audit(1761958985.459:502): pid=5251 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Nov 1 01:03:05.459000 audit[5251]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff10d3ca30 a2=3 a3=0 items=0 ppid=1 pid=5251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:05.459000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:05.468481 kernel: audit: type=1300 audit(1761958985.459:502): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff10d3ca30 a2=3 a3=0 items=0 ppid=1 pid=5251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:05.468552 kernel: audit: type=1327 audit(1761958985.459:502): proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:05.471706 systemd[1]: Started session-17.scope. Nov 1 01:03:05.471937 systemd-logind[1341]: New session 17 of user core. 
Nov 1 01:03:05.473000 audit[5251]: USER_START pid=5251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.479892 kernel: audit: type=1105 audit(1761958985.473:503): pid=5251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.478000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.484696 kernel: audit: type=1103 audit(1761958985.478:504): pid=5254 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.818137 sshd[5251]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:05.818000 audit[5251]: USER_END pid=5251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.823688 kernel: audit: type=1106 audit(1761958985.818:505): pid=5251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.825043 systemd-logind[1341]: Session 17 logged out. Waiting for processes to exit. Nov 1 01:03:05.825834 systemd[1]: sshd@14-139.178.70.108:22-147.75.109.163:52996.service: Deactivated successfully. Nov 1 01:03:05.826306 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 01:03:05.827147 systemd-logind[1341]: Removed session 17. Nov 1 01:03:05.822000 audit[5251]: CRED_DISP pid=5251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:05.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.108:22-147.75.109.163:52996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:05.830705 kernel: audit: type=1104 audit(1761958985.822:506): pid=5251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:06.017573 kubelet[2289]: E1101 01:03:06.017549 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:03:10.820095 systemd[1]: Started 
sshd@15-139.178.70.108:22-147.75.109.163:56422.service. Nov 1 01:03:10.821258 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:03:10.821295 kernel: audit: type=1130 audit(1761958990.819:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.108:22-147.75.109.163:56422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:10.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.108:22-147.75.109.163:56422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:10.918000 audit[5272]: USER_ACCT pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.919886 sshd[5272]: Accepted publickey for core from 147.75.109.163 port 56422 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:10.922676 kernel: audit: type=1101 audit(1761958990.918:509): pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.922000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.925824 sshd[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:10.926696 kernel: audit: type=1103 audit(1761958990.922:510): pid=5272 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.929701 kernel: audit: type=1006 audit(1761958990.922:511): pid=5272 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Nov 1 01:03:10.922000 audit[5272]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8c7e0ad0 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:10.933696 kernel: audit: type=1300 audit(1761958990.922:511): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8c7e0ad0 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:10.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:10.935926 kernel: audit: type=1327 audit(1761958990.922:511): proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:10.937499 systemd-logind[1341]: New session 18 of user core. Nov 1 01:03:10.937851 systemd[1]: Started session-18.scope. 
Nov 1 01:03:10.940000 audit[5272]: USER_START pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.944000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.948314 kernel: audit: type=1105 audit(1761958990.940:512): pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:10.948356 kernel: audit: type=1103 audit(1761958990.944:513): pid=5275 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.301049 systemd[1]: Started sshd@16-139.178.70.108:22-147.75.109.163:56424.service. Nov 1 01:03:11.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.108:22-147.75.109.163:56424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:11.304717 kernel: audit: type=1130 audit(1761958991.300:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.108:22-147.75.109.163:56424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:03:11.307499 sshd[5272]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:11.311000 audit[5272]: USER_END pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.312900 systemd[1]: sshd@15-139.178.70.108:22-147.75.109.163:56422.service: Deactivated successfully. Nov 1 01:03:11.317015 kernel: audit: type=1106 audit(1761958991.311:515): pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.316786 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 01:03:11.316821 systemd-logind[1341]: Session 18 logged out. Waiting for processes to exit. Nov 1 01:03:11.311000 audit[5272]: CRED_DISP pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.108:22-147.75.109.163:56422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:11.318682 systemd-logind[1341]: Removed session 18. 
Nov 1 01:03:11.367974 sshd[5282]: Accepted publickey for core from 147.75.109.163 port 56424 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:11.368949 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:11.367000 audit[5282]: USER_ACCT pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.368000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.368000 audit[5282]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8b26ec20 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:11.368000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:11.371923 systemd[1]: Started session-19.scope. Nov 1 01:03:11.372571 systemd-logind[1341]: New session 19 of user core. 
Nov 1 01:03:11.374000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.375000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.846699 systemd[1]: Started sshd@17-139.178.70.108:22-147.75.109.163:56428.service. Nov 1 01:03:11.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.108:22-147.75.109.163:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:11.849047 sshd[5282]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:11.849000 audit[5282]: USER_END pid=5282 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.849000 audit[5282]: CRED_DISP pid=5282 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.852025 systemd[1]: sshd@16-139.178.70.108:22-147.75.109.163:56424.service: Deactivated successfully. 
Nov 1 01:03:11.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.108:22-147.75.109.163:56424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:11.853586 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 01:03:11.853757 systemd-logind[1341]: Session 19 logged out. Waiting for processes to exit. Nov 1 01:03:11.856581 systemd-logind[1341]: Removed session 19. Nov 1 01:03:11.966000 audit[5293]: USER_ACCT pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.967858 sshd[5293]: Accepted publickey for core from 147.75.109.163 port 56428 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:11.967000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.967000 audit[5293]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3c825bb0 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:11.967000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:11.968327 sshd[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:11.975941 systemd[1]: Started session-20.scope. Nov 1 01:03:11.976205 systemd-logind[1341]: New session 20 of user core. 
Nov 1 01:03:11.979000 audit[5293]: USER_START pid=5293 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:11.980000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:12.820606 systemd[1]: Started sshd@18-139.178.70.108:22-147.75.109.163:56436.service. Nov 1 01:03:12.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.108:22-147.75.109.163:56436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:12.844943 sshd[5293]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:12.847000 audit[5293]: USER_END pid=5293 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:12.850000 audit[5293]: CRED_DISP pid=5293 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:12.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.108:22-147.75.109.163:56428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:03:12.852067 systemd[1]: sshd@17-139.178.70.108:22-147.75.109.163:56428.service: Deactivated successfully. Nov 1 01:03:12.853013 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 01:03:12.853606 systemd-logind[1341]: Session 20 logged out. Waiting for processes to exit. Nov 1 01:03:12.855626 systemd-logind[1341]: Removed session 20. Nov 1 01:03:12.944000 audit[5308]: USER_ACCT pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:12.945121 sshd[5308]: Accepted publickey for core from 147.75.109.163 port 56436 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:12.945000 audit[5308]: CRED_ACQ pid=5308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:12.945000 audit[5308]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0afcc710 a2=3 a3=0 items=0 ppid=1 pid=5308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:12.945000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:12.948735 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:12.956550 systemd[1]: Started session-21.scope. Nov 1 01:03:12.959817 systemd-logind[1341]: New session 21 of user core. 
Nov 1 01:03:12.965000 audit[5308]: USER_START pid=5308 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:12.966000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:13.128000 audit[5316]: NETFILTER_CFG table=filter:132 family=2 entries=26 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:13.128000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc1b584920 a2=0 a3=7ffc1b58490c items=0 ppid=2387 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:13.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:13.133000 audit[5316]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:13.133000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1b584920 a2=0 a3=0 items=0 ppid=2387 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:13.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:13.158000 
audit[5318]: NETFILTER_CFG table=filter:134 family=2 entries=38 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:13.158000 audit[5318]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd8102cd40 a2=0 a3=7ffd8102cd2c items=0 ppid=2387 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:13.158000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:13.175000 audit[5318]: NETFILTER_CFG table=nat:135 family=2 entries=20 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:13.175000 audit[5318]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd8102cd40 a2=0 a3=0 items=0 ppid=2387 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:13.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:13.595741 env[1357]: time="2025-11-01T01:03:13.595650949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:13.621851 kubelet[2289]: E1101 01:03:13.611560 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:03:13.954334 env[1357]: time="2025-11-01T01:03:13.954294397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:13.954636 env[1357]: time="2025-11-01T01:03:13.954607146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:13.962727 kubelet[2289]: E1101 01:03:13.960291 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:13.965502 kubelet[2289]: E1101 01:03:13.964186 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:14.031650 env[1357]: time="2025-11-01T01:03:14.031519989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:03:14.036825 kubelet[2289]: E1101 01:03:14.036784 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgbnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7787d665fb-fb8nb_calico-apiserver(9e4c5135-dd57-4916-a0b6-81789ca74a77): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:14.037886 kubelet[2289]: E1101 01:03:14.037863 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:03:14.338151 env[1357]: time="2025-11-01T01:03:14.338047645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:14.338484 env[1357]: time="2025-11-01T01:03:14.338441375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:03:14.340515 kubelet[2289]: E1101 01:03:14.338619 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:03:14.340515 kubelet[2289]: E1101 01:03:14.338682 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:03:14.340515 kubelet[2289]: E1101 01:03:14.338777 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67xqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-647db64f7d-p9wv7_calico-system(a48af7bf-dcba-4afd-bada-a7a0787cc063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:14.340515 kubelet[2289]: E1101 01:03:14.340095 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:03:14.493982 systemd[1]: Started sshd@19-139.178.70.108:22-147.75.109.163:56438.service. Nov 1 01:03:14.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.108:22-147.75.109.163:56438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:14.558160 sshd[5308]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:14.654000 audit[5308]: USER_END pid=5308 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:14.668000 audit[5308]: CRED_DISP pid=5308 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:14.748178 systemd[1]: sshd@18-139.178.70.108:22-147.75.109.163:56436.service: Deactivated successfully. Nov 1 01:03:14.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.108:22-147.75.109.163:56436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:14.749042 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 01:03:14.749411 systemd-logind[1341]: Session 21 logged out. Waiting for processes to exit. Nov 1 01:03:14.749956 systemd-logind[1341]: Removed session 21. 
Nov 1 01:03:15.112000 audit[5325]: USER_ACCT pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:15.113157 sshd[5325]: Accepted publickey for core from 147.75.109.163 port 56438 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:15.112000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:15.112000 audit[5325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0dba6890 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:15.112000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:15.136186 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:15.151686 systemd[1]: Started session-22.scope. Nov 1 01:03:15.152344 systemd-logind[1341]: New session 22 of user core. 
Nov 1 01:03:15.155000 audit[5325]: USER_START pid=5325 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:15.156000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:15.615593 sshd[5325]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:15.615000 audit[5325]: USER_END pid=5325 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:15.615000 audit[5325]: CRED_DISP pid=5325 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:15.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.108:22-147.75.109.163:56438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:15.617397 systemd[1]: sshd@19-139.178.70.108:22-147.75.109.163:56438.service: Deactivated successfully. Nov 1 01:03:15.618270 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 01:03:15.618804 systemd-logind[1341]: Session 22 logged out. Waiting for processes to exit. Nov 1 01:03:15.619342 systemd-logind[1341]: Removed session 22. 
Nov 1 01:03:16.018082 env[1357]: time="2025-11-01T01:03:16.017911907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:16.334330 env[1357]: time="2025-11-01T01:03:16.334116883Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:16.339708 env[1357]: time="2025-11-01T01:03:16.339615498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:16.339817 kubelet[2289]: E1101 01:03:16.339789 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:16.340043 kubelet[2289]: E1101 01:03:16.339828 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:16.340043 kubelet[2289]: E1101 01:03:16.339915 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6njt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7787d665fb-xxt5l_calico-apiserver(27248a54-b567-4865-8268-2eb8267aa120): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:16.341248 kubelet[2289]: E1101 01:03:16.341221 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:03:18.017896 env[1357]: time="2025-11-01T01:03:18.017863181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:03:18.343458 env[1357]: time="2025-11-01T01:03:18.343376405Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:18.405923 env[1357]: time="2025-11-01T01:03:18.404608037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:03:18.405923 env[1357]: time="2025-11-01T01:03:18.405453604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:03:18.407124 kubelet[2289]: E1101 01:03:18.404841 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 
01:03:18.407124 kubelet[2289]: E1101 01:03:18.404884 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:03:18.407124 kubelet[2289]: E1101 01:03:18.405072 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:24a2a9b1916e445d94a05dba571afa1c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:18.697649 env[1357]: time="2025-11-01T01:03:18.697607436Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:18.714803 env[1357]: time="2025-11-01T01:03:18.714763467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:03:18.725450 env[1357]: time="2025-11-01T01:03:18.715296850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:03:18.727794 kubelet[2289]: E1101 01:03:18.715037 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:03:18.727794 kubelet[2289]: E1101 01:03:18.715081 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:03:18.727794 kubelet[2289]: E1101 01:03:18.715478 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:18.734000 audit[5366]: NETFILTER_CFG table=filter:136 family=2 entries=26 op=nft_register_rule pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:18.759485 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 01:03:18.782248 kernel: audit: type=1325 audit(1761958998.734:557): table=filter:136 family=2 entries=26 op=nft_register_rule pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:18.794421 kernel: audit: type=1300 audit(1761958998.734:557): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffddd95de20 a2=0 a3=7ffddd95de0c items=0 ppid=2387 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:18.794466 kernel: audit: type=1327 audit(1761958998.734:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:18.797318 kernel: audit: type=1325 audit(1761958998.749:558): table=nat:137 family=2 entries=104 op=nft_register_chain pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:18.797348 kernel: audit: type=1300 audit(1761958998.749:558): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffddd95de20 a2=0 a3=7ffddd95de0c items=0 ppid=2387 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:18.797368 kernel: audit: type=1327 audit(1761958998.749:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 
01:03:18.734000 audit[5366]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffddd95de20 a2=0 a3=7ffddd95de0c items=0 ppid=2387 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:18.734000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:18.749000 audit[5366]: NETFILTER_CFG table=nat:137 family=2 entries=104 op=nft_register_chain pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 01:03:18.749000 audit[5366]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffddd95de20 a2=0 a3=7ffddd95de0c items=0 ppid=2387 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:18.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 01:03:19.022078 env[1357]: time="2025-11-01T01:03:19.021957161Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:19.022432 env[1357]: time="2025-11-01T01:03:19.022319685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:03:19.023619 env[1357]: time="2025-11-01T01:03:19.023361156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:03:19.023738 kubelet[2289]: E1101 01:03:19.022483 2289 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:03:19.023738 kubelet[2289]: E1101 01:03:19.022518 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:03:19.023738 kubelet[2289]: E1101 01:03:19.022726 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9jxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr
:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d4bf4bbfd-xnxgp_calico-system(e47c436e-7585-46a1-976f-d1673b769a3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:19.025485 kubelet[2289]: E1101 01:03:19.025455 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:03:19.340490 env[1357]: 
time="2025-11-01T01:03:19.340394500Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:19.388218 env[1357]: time="2025-11-01T01:03:19.388163297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:03:19.388391 kubelet[2289]: E1101 01:03:19.388365 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:03:19.388441 kubelet[2289]: E1101 01:03:19.388402 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:03:19.388522 kubelet[2289]: E1101 01:03:19.388491 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-442hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-clw4d_calico-system(2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:19.389720 kubelet[2289]: E1101 01:03:19.389700 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:03:20.090378 env[1357]: time="2025-11-01T01:03:20.090212730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:03:20.413371 env[1357]: time="2025-11-01T01:03:20.413326221Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:20.413872 env[1357]: time="2025-11-01T01:03:20.413830807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:03:20.413990 kubelet[2289]: E1101 01:03:20.413965 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:03:20.414193 kubelet[2289]: E1101 01:03:20.413997 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:03:20.414193 kubelet[2289]: E1101 01:03:20.414079 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mx6f,ReadOnly:true,MountPath:/var/
run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hsw5f_calico-system(cb274633-10f4-4984-be19-e536608b3bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:20.415528 kubelet[2289]: E1101 01:03:20.415504 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1" Nov 1 01:03:20.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.108:22-147.75.109.163:37660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:20.690072 kernel: audit: type=1130 audit(1761959000.669:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.108:22-147.75.109.163:37660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:20.670228 systemd[1]: Started sshd@20-139.178.70.108:22-147.75.109.163:37660.service. Nov 1 01:03:20.812827 kernel: audit: type=1101 audit(1761959000.805:560): pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:20.812921 kernel: audit: type=1103 audit(1761959000.806:561): pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:20.805000 audit[5367]: USER_ACCT pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:20.820901 kernel: audit: type=1006 audit(1761959000.806:562): pid=5367 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 
ses=23 res=1 Nov 1 01:03:20.806000 audit[5367]: CRED_ACQ pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:20.806000 audit[5367]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd39de2210 a2=3 a3=0 items=0 ppid=1 pid=5367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:20.806000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:20.837777 sshd[5367]: Accepted publickey for core from 147.75.109.163 port 37660 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:20.829085 sshd[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:20.872539 systemd[1]: Started session-23.scope. Nov 1 01:03:20.872687 systemd-logind[1341]: New session 23 of user core. 
Nov 1 01:03:20.875000 audit[5367]: USER_START pid=5367 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:20.876000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:22.112270 sshd[5367]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:22.112000 audit[5367]: USER_END pid=5367 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:22.112000 audit[5367]: CRED_DISP pid=5367 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:22.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.108:22-147.75.109.163:37660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:22.114554 systemd[1]: sshd@20-139.178.70.108:22-147.75.109.163:37660.service: Deactivated successfully. Nov 1 01:03:22.115581 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 01:03:22.115828 systemd-logind[1341]: Session 23 logged out. Waiting for processes to exit. Nov 1 01:03:22.116613 systemd-logind[1341]: Removed session 23. 
Nov 1 01:03:25.304697 kubelet[2289]: E1101 01:03:25.304632 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-647db64f7d-p9wv7" podUID="a48af7bf-dcba-4afd-bada-a7a0787cc063" Nov 1 01:03:27.016882 kubelet[2289]: E1101 01:03:27.016852 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-xxt5l" podUID="27248a54-b567-4865-8268-2eb8267aa120" Nov 1 01:03:27.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.108:22-147.75.109.163:37664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:27.168721 systemd[1]: Started sshd@21-139.178.70.108:22-147.75.109.163:37664.service. 
Nov 1 01:03:27.208953 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 01:03:27.220035 kernel: audit: type=1130 audit(1761959007.168:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.108:22-147.75.109.163:37664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:27.411559 sshd[5400]: Accepted publickey for core from 147.75.109.163 port 37664 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:27.410000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.411000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.418896 kernel: audit: type=1101 audit(1761959007.410:569): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.418941 kernel: audit: type=1103 audit(1761959007.411:570): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.419543 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:27.411000 audit[5400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcde3c4290 a2=3 a3=0 items=0 
ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:27.425546 kernel: audit: type=1006 audit(1761959007.411:571): pid=5400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 01:03:27.426627 kernel: audit: type=1300 audit(1761959007.411:571): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcde3c4290 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:27.426661 kernel: audit: type=1327 audit(1761959007.411:571): proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:27.411000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:27.430031 systemd-logind[1341]: New session 24 of user core. Nov 1 01:03:27.430382 systemd[1]: Started session-24.scope. 
Nov 1 01:03:27.433000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.438000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.443746 kernel: audit: type=1105 audit(1761959007.433:572): pid=5400 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.443778 kernel: audit: type=1103 audit(1761959007.438:573): pid=5403 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:27.754702 update_engine[1342]: I1101 01:03:27.753764 1342 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 01:03:27.754702 update_engine[1342]: I1101 01:03:27.754262 1342 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 01:03:27.758591 update_engine[1342]: I1101 01:03:27.758365 1342 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 01:03:27.760071 update_engine[1342]: I1101 01:03:27.760054 1342 omaha_request_params.cc:62] Current group set to lts Nov 1 01:03:27.766903 update_engine[1342]: I1101 01:03:27.766707 1342 update_attempter.cc:499] Already updated boot flags. 
Skipping. Nov 1 01:03:27.766903 update_engine[1342]: I1101 01:03:27.766721 1342 update_attempter.cc:643] Scheduling an action processor start. Nov 1 01:03:27.767166 update_engine[1342]: I1101 01:03:27.767153 1342 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 01:03:27.769265 update_engine[1342]: I1101 01:03:27.769217 1342 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 01:03:27.769316 update_engine[1342]: I1101 01:03:27.769270 1342 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 01:03:27.769316 update_engine[1342]: I1101 01:03:27.769274 1342 omaha_request_action.cc:271] Request: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: Nov 1 01:03:27.769316 update_engine[1342]: I1101 01:03:27.769276 1342 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:03:27.776926 update_engine[1342]: I1101 01:03:27.776880 1342 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:03:27.777021 update_engine[1342]: E1101 01:03:27.776958 1342 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:03:27.777160 update_engine[1342]: I1101 01:03:27.777148 1342 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 01:03:27.870142 locksmithd[1407]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 01:03:28.030144 env[1357]: time="2025-11-01T01:03:28.029938600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:28.132159 sshd[5400]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:28.147000 audit[5400]: USER_END 
pid=5400 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:28.153732 kernel: audit: type=1106 audit(1761959008.147:574): pid=5400 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:28.153000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:28.160541 kernel: audit: type=1104 audit(1761959008.153:575): pid=5400 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:28.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.108:22-147.75.109.163:37664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:28.157626 systemd[1]: sshd@21-139.178.70.108:22-147.75.109.163:37664.service: Deactivated successfully. Nov 1 01:03:28.158471 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 01:03:28.158493 systemd-logind[1341]: Session 24 logged out. Waiting for processes to exit. Nov 1 01:03:28.159155 systemd-logind[1341]: Removed session 24. 
Nov 1 01:03:28.356589 env[1357]: time="2025-11-01T01:03:28.356485601Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:28.357316 env[1357]: time="2025-11-01T01:03:28.356916870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:28.361276 kubelet[2289]: E1101 01:03:28.361240 2289 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:28.362741 kubelet[2289]: E1101 01:03:28.362649 2289 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:28.366167 kubelet[2289]: E1101 01:03:28.366111 2289 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpnrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c87d7ff56-95ljj_calico-apiserver(dbcec211-4cad-40a0-8aa5-e63111d93180): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:28.367283 kubelet[2289]: E1101 01:03:28.367252 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c87d7ff56-95ljj" podUID="dbcec211-4cad-40a0-8aa5-e63111d93180" Nov 1 01:03:29.016695 kubelet[2289]: E1101 01:03:29.016656 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7787d665fb-fb8nb" podUID="9e4c5135-dd57-4916-a0b6-81789ca74a77" Nov 1 01:03:33.016847 kubelet[2289]: E1101 01:03:33.016821 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clw4d" podUID="2e92d5bc-ac6d-4f8c-9478-cb1bbeb19fda" Nov 1 01:03:33.112897 systemd[1]: Started sshd@22-139.178.70.108:22-147.75.109.163:50718.service. Nov 1 01:03:33.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.108:22-147.75.109.163:50718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:33.113849 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 01:03:33.114502 kernel: audit: type=1130 audit(1761959013.112:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.108:22-147.75.109.163:50718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 01:03:33.163000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.164224 sshd[5412]: Accepted publickey for core from 147.75.109.163 port 50718 ssh2: RSA SHA256:Zb6OsOkmHuKObgLqAaxNeVGNfZDCbP6FgE1ozchKog8 Nov 1 01:03:33.165342 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:03:33.163000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.170542 kernel: audit: type=1101 audit(1761959013.163:578): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.170578 kernel: audit: type=1103 audit(1761959013.163:579): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.171431 systemd[1]: Started session-25.scope. Nov 1 01:03:33.172323 systemd-logind[1341]: New session 25 of user core. 
Nov 1 01:03:33.174214 kernel: audit: type=1006 audit(1761959013.163:580): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Nov 1 01:03:33.163000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaab14ea0 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:33.179161 kernel: audit: type=1300 audit(1761959013.163:580): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaab14ea0 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 01:03:33.163000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:33.187766 kernel: audit: type=1327 audit(1761959013.163:580): proctitle=737368643A20636F7265205B707269765D Nov 1 01:03:33.174000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.191696 kernel: audit: type=1105 audit(1761959013.174:581): pid=5412 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.178000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Nov 1 01:03:33.195699 kernel: audit: type=1103 audit(1761959013.178:582): pid=5415 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.318376 sshd[5412]: pam_unix(sshd:session): session closed for user core Nov 1 01:03:33.319000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.322848 systemd-logind[1341]: Session 25 logged out. Waiting for processes to exit. Nov 1 01:03:33.327684 kernel: audit: type=1106 audit(1761959013.319:583): pid=5412 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.327768 kernel: audit: type=1104 audit(1761959013.319:584): pid=5412 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.319000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 01:03:33.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.108:22-147.75.109.163:50718 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 01:03:33.323935 systemd[1]: sshd@22-139.178.70.108:22-147.75.109.163:50718.service: Deactivated successfully. Nov 1 01:03:33.324415 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 01:03:33.329227 systemd-logind[1341]: Removed session 25. Nov 1 01:03:34.017274 kubelet[2289]: E1101 01:03:34.017248 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d4bf4bbfd-xnxgp" podUID="e47c436e-7585-46a1-976f-d1673b769a3e" Nov 1 01:03:35.017197 kubelet[2289]: E1101 01:03:35.017173 2289 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hsw5f" podUID="cb274633-10f4-4984-be19-e536608b3bf1"