Oct 31 01:29:30.709416 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Oct 30 23:32:41 -00 2025
Oct 31 01:29:30.709434 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3
Oct 31 01:29:30.709441 kernel: Disabled fast string operations
Oct 31 01:29:30.709445 kernel: BIOS-provided physical RAM map:
Oct 31 01:29:30.709449 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 31 01:29:30.709453 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 31 01:29:30.709459 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 31 01:29:30.709464 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 31 01:29:30.709468 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 31 01:29:30.709472 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 31 01:29:30.709477 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 31 01:29:30.709481 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 31 01:29:30.709485 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 31 01:29:30.709489 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 31 01:29:30.709496 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 31 01:29:30.709501 kernel: NX (Execute Disable) protection: active
Oct 31 01:29:30.709505 kernel: SMBIOS 2.7 present.
Oct 31 01:29:30.709510 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 31 01:29:30.709515 kernel: vmware: hypercall mode: 0x00
Oct 31 01:29:30.709519 kernel: Hypervisor detected: VMware
Oct 31 01:29:30.709525 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 31 01:29:30.709530 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 31 01:29:30.709535 kernel: vmware: using clock offset of 2691025058 ns
Oct 31 01:29:30.709539 kernel: tsc: Detected 3408.000 MHz processor
Oct 31 01:29:30.709544 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 01:29:30.709550 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 01:29:30.709554 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 31 01:29:30.709559 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 01:29:30.709564 kernel: total RAM covered: 3072M
Oct 31 01:29:30.709570 kernel: Found optimal setting for mtrr clean up
Oct 31 01:29:30.709575 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 31 01:29:30.709580 kernel: Using GB pages for direct mapping
Oct 31 01:29:30.709585 kernel: ACPI: Early table checksum verification disabled
Oct 31 01:29:30.709590 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 31 01:29:30.709594 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 31 01:29:30.709599 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 31 01:29:30.709604 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 31 01:29:30.709609 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 31 01:29:30.709613 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 31 01:29:30.709619 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 31 01:29:30.709626 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 31 01:29:30.709631 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 31 01:29:30.709636 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 31 01:29:30.709642 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 31 01:29:30.709648 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 31 01:29:30.709653 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 31 01:29:30.709658 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 31 01:29:30.709663 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 31 01:29:30.709668 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 31 01:29:30.709673 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 31 01:29:30.709679 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 31 01:29:30.709684 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 31 01:29:30.709689 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 31 01:29:30.709695 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 31 01:29:30.709700 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 31 01:29:30.709705 kernel: system APIC only can use physical flat
Oct 31 01:29:30.709710 kernel: Setting APIC routing to physical flat.
Oct 31 01:29:30.709715 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 31 01:29:30.709721 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 31 01:29:30.709726 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 31 01:29:30.709731 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 31 01:29:30.709736 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 31 01:29:30.709742 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 31 01:29:30.709747 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 31 01:29:30.709752 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 31 01:29:30.709757 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Oct 31 01:29:30.709762 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Oct 31 01:29:30.709767 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Oct 31 01:29:30.709772 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Oct 31 01:29:30.709777 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Oct 31 01:29:30.709782 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Oct 31 01:29:30.709787 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Oct 31 01:29:30.709793 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Oct 31 01:29:30.709798 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Oct 31 01:29:30.709803 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Oct 31 01:29:30.709808 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Oct 31 01:29:30.709813 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Oct 31 01:29:30.709818 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Oct 31 01:29:30.709823 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Oct 31 01:29:30.709828 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Oct 31 01:29:30.709833 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Oct 31 01:29:30.709838 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Oct 31 01:29:30.709844 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Oct 31 01:29:30.709849 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Oct 31 01:29:30.709854 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Oct 31 01:29:30.709859 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Oct 31 01:29:30.709864 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Oct 31 01:29:30.709869 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Oct 31 01:29:30.709874 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Oct 31 01:29:30.709879 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Oct 31 01:29:30.709884 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Oct 31 01:29:30.709889 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Oct 31 01:29:30.709895 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Oct 31 01:29:30.709900 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Oct 31 01:29:30.709905 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Oct 31 01:29:30.709910 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Oct 31 01:29:30.709915 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Oct 31 01:29:30.709920 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Oct 31 01:29:30.709925 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Oct 31 01:29:30.709930 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Oct 31 01:29:30.709935 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Oct 31 01:29:30.709940 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Oct 31 01:29:30.709946 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Oct 31 01:29:30.709951 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Oct 31 01:29:30.709956 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Oct 31 01:29:30.709961 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Oct 31 01:29:30.709967 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Oct 31 01:29:30.709972 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Oct 31 01:29:30.709977 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Oct 31 01:29:30.709982 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Oct 31 01:29:30.709987 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Oct 31 01:29:30.709992 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Oct 31 01:29:30.709997 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Oct 31 01:29:30.710002 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Oct 31 01:29:30.710008 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Oct 31 01:29:30.710013 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Oct 31 01:29:30.710018 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Oct 31 01:29:30.710023 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Oct 31 01:29:30.710033 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Oct 31 01:29:30.710038 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Oct 31 01:29:30.710044 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Oct 31 01:29:30.710049 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Oct 31 01:29:30.710054 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Oct 31 01:29:30.710061 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Oct 31 01:29:30.710066 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Oct 31 01:29:30.710072 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Oct 31 01:29:30.710077 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Oct 31 01:29:30.710082 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Oct 31 01:29:30.710088 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Oct 31 01:29:30.710093 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Oct 31 01:29:30.710099 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Oct 31 01:29:30.710105 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Oct 31 01:29:30.710110 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Oct 31 01:29:30.710115 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Oct 31 01:29:30.710121 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Oct 31 01:29:30.710126 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Oct 31 01:29:30.710131 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Oct 31 01:29:30.710137 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Oct 31 01:29:30.710142 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Oct 31 01:29:30.710149 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Oct 31 01:29:30.710154 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Oct 31 01:29:30.710159 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Oct 31 01:29:30.710165 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Oct 31 01:29:30.710170 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Oct 31 01:29:30.710176 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Oct 31 01:29:30.710181 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Oct 31 01:29:30.710187 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Oct 31 01:29:30.710192 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Oct 31 01:29:30.710197 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Oct 31 01:29:30.710204 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Oct 31 01:29:30.710209 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Oct 31 01:29:30.710214 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Oct 31 01:29:30.710220 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Oct 31 01:29:30.710225 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Oct 31 01:29:30.710231 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Oct 31 01:29:30.710236 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Oct 31 01:29:30.710241 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Oct 31 01:29:30.710246 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Oct 31 01:29:30.710252 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Oct 31 01:29:30.710258 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Oct 31 01:29:30.710263 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Oct 31 01:29:30.710269 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Oct 31 01:29:30.710274 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Oct 31 01:29:30.710280 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Oct 31 01:29:30.710285 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Oct 31 01:29:30.710290 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Oct 31 01:29:30.710296 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Oct 31 01:29:30.710301 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Oct 31 01:29:30.710307 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Oct 31 01:29:30.710313 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Oct 31 01:29:30.710318 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Oct 31 01:29:30.710324 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Oct 31 01:29:30.710329 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Oct 31 01:29:30.710334 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Oct 31 01:29:30.710340 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Oct 31 01:29:30.710345 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Oct 31 01:29:30.710351 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Oct 31 01:29:30.710356 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Oct 31 01:29:30.710361 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Oct 31 01:29:30.710367 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Oct 31 01:29:30.710373 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Oct 31 01:29:30.710378 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Oct 31 01:29:30.710384 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Oct 31 01:29:30.710389 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Oct 31 01:29:30.710399 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Oct 31 01:29:30.710414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 31 01:29:30.710420 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 31 01:29:30.710426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 31 01:29:30.710433 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Oct 31 01:29:30.710439 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Oct 31 01:29:30.710445 kernel: Zone ranges:
Oct 31 01:29:30.710451 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 01:29:30.710456 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Oct 31 01:29:30.710462 kernel: Normal empty
Oct 31 01:29:30.710467 kernel: Movable zone start for each node
Oct 31 01:29:30.710473 kernel: Early memory node ranges
Oct 31 01:29:30.710478 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 31 01:29:30.710484 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 31 01:29:30.710490 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 31 01:29:30.710495 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 31 01:29:30.710501 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 01:29:30.710506 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 31 01:29:30.710512 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 31 01:29:30.710517 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 31 01:29:30.710523 kernel: system APIC only can use physical flat
Oct 31 01:29:30.710528 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 31 01:29:30.710534 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 31 01:29:30.710540 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Oct 31 01:29:30.710546 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Oct 31 01:29:30.710551 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Oct 31 01:29:30.710557 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Oct 31 01:29:30.710562 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Oct 31 01:29:30.710567 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Oct 31 01:29:30.710573 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Oct 31 01:29:30.710579 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Oct 31 01:29:30.710584 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Oct 31 01:29:30.710589 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Oct 31 01:29:30.710596 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Oct 31 01:29:30.710601 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Oct 31 01:29:30.710607 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Oct 31 01:29:30.710612 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Oct 31 01:29:30.710618 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Oct 31 01:29:30.710623 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Oct 31 01:29:30.710628 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Oct 31 01:29:30.710634 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Oct 31 01:29:30.710639 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Oct 31 01:29:30.710645 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Oct 31 01:29:30.710651 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Oct 31 01:29:30.710656 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Oct 31 01:29:30.710662 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Oct 31 01:29:30.710667 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Oct 31 01:29:30.710672 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Oct 31 01:29:30.710678 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Oct 31 01:29:30.710683 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Oct 31 01:29:30.710689 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Oct 31 01:29:30.710694 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Oct 31 01:29:30.710700 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Oct 31 01:29:30.710706 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Oct 31 01:29:30.710711 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Oct 31 01:29:30.710717 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Oct 31 01:29:30.710722 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Oct 31 01:29:30.710727 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Oct 31 01:29:30.710733 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Oct 31 01:29:30.710738 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Oct 31 01:29:30.710743 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Oct 31 01:29:30.710750 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Oct 31 01:29:30.710755 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Oct 31 01:29:30.710761 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Oct 31 01:29:30.710766 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Oct 31 01:29:30.710772 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Oct 31 01:29:30.710777 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Oct 31 01:29:30.710782 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Oct 31 01:29:30.710788 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Oct 31 01:29:30.710793 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Oct 31 01:29:30.711018 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Oct 31 01:29:30.711028 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Oct 31 01:29:30.711034 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Oct 31 01:29:30.711039 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Oct 31 01:29:30.711045 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Oct 31 01:29:30.711050 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Oct 31 01:29:30.711055 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Oct 31 01:29:30.711061 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Oct 31 01:29:30.711066 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Oct 31 01:29:30.711072 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Oct 31 01:29:30.711081 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Oct 31 01:29:30.711088 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Oct 31 01:29:30.711096 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Oct 31 01:29:30.711103 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Oct 31 01:29:30.711110 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Oct 31 01:29:30.711118 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Oct 31 01:29:30.711126 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Oct 31 01:29:30.711132 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Oct 31 01:29:30.711137 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Oct 31 01:29:30.711143 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Oct 31 01:29:30.711150 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Oct 31 01:29:30.711155 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Oct 31 01:29:30.711161 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Oct 31 01:29:30.711166 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Oct 31 01:29:30.711172 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Oct 31 01:29:30.711177 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Oct 31 01:29:30.711183 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Oct 31 01:29:30.711188 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Oct 31 01:29:30.711194 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Oct 31 01:29:30.711200 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Oct 31 01:29:30.711205 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Oct 31 01:29:30.711211 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Oct 31 01:29:30.711220 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Oct 31 01:29:30.711226 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Oct 31 01:29:30.711231 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Oct 31 01:29:30.711237 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Oct 31 01:29:30.711242 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Oct 31 01:29:30.711247 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Oct 31 01:29:30.711254 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Oct 31 01:29:30.711260 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Oct 31 01:29:30.711265 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Oct 31 01:29:30.711270 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Oct 31 01:29:30.711276 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Oct 31 01:29:30.711281 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Oct 31 01:29:30.711287 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Oct 31 01:29:30.711292 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Oct 31 01:29:30.711297 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Oct 31 01:29:30.711303 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Oct 31 01:29:30.711309 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Oct 31 01:29:30.711315 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Oct 31 01:29:30.711320 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Oct 31 01:29:30.711325 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Oct 31 01:29:30.711331 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Oct 31 01:29:30.711336 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Oct 31 01:29:30.711342 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Oct 31 01:29:30.711347 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Oct 31 01:29:30.711353 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Oct 31 01:29:30.711359 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Oct 31 01:29:30.711364 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Oct 31 01:29:30.711370 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Oct 31 01:29:30.711375 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Oct 31 01:29:30.711381 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Oct 31 01:29:30.711386 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Oct 31 01:29:30.711392 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Oct 31 01:29:30.711410 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Oct 31 01:29:30.711416 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Oct 31 01:29:30.711421 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Oct 31 01:29:30.711428 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Oct 31 01:29:30.711434 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Oct 31 01:29:30.711439 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Oct 31 01:29:30.711445 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Oct 31 01:29:30.711450 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Oct 31 01:29:30.711456 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Oct 31 01:29:30.711461 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Oct 31 01:29:30.711466 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Oct 31 01:29:30.711472 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Oct 31 01:29:30.711478 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Oct 31 01:29:30.711483 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Oct 31 01:29:30.711489 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Oct 31 01:29:30.711494 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Oct 31 01:29:30.711500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Oct 31 01:29:30.711505 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 01:29:30.711511 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Oct 31 01:29:30.711517 kernel: TSC deadline timer available
Oct 31 01:29:30.711522 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Oct 31 01:29:30.711529 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Oct 31 01:29:30.711534 kernel: Booting paravirtualized kernel on VMware hypervisor
Oct 31 01:29:30.711540 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 01:29:30.711545 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Oct 31 01:29:30.711551 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Oct 31 01:29:30.711559 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Oct 31 01:29:30.711565 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Oct 31 01:29:30.711571 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Oct 31 01:29:30.711576 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Oct 31 01:29:30.711583 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Oct 31 01:29:30.711588 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Oct 31 01:29:30.711594 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Oct 31 01:29:30.711599 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Oct 31 01:29:30.711611 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Oct 31 01:29:30.711618 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Oct 31 01:29:30.711624 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Oct 31 01:29:30.711629 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Oct 31 01:29:30.711635 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Oct 31 01:29:30.711642 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Oct 31 01:29:30.711647 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Oct 31 01:29:30.711653 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Oct 31 01:29:30.711659 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Oct 31 01:29:30.711665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Oct 31 01:29:30.711671 kernel: Policy zone: DMA32
Oct 31 01:29:30.711678 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3
Oct 31 01:29:30.711684 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 31 01:29:30.711691 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Oct 31 01:29:30.711696 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Oct 31 01:29:30.711703 kernel: printk: log_buf_len min size: 262144 bytes
Oct 31 01:29:30.711708 kernel: printk: log_buf_len: 1048576 bytes
Oct 31 01:29:30.711714 kernel: printk: early log buf free: 239728(91%)
Oct 31 01:29:30.711720 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 01:29:30.711726 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 31 01:29:30.711732 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 01:29:30.711738 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 155976K reserved, 0K cma-reserved)
Oct 31 01:29:30.711745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Oct 31 01:29:30.711751 kernel: ftrace: allocating 34614 entries in 136 pages
Oct 31 01:29:30.711758 kernel: ftrace: allocated 136 pages with 2 groups
Oct 31 01:29:30.711765 kernel: rcu: Hierarchical RCU implementation.
Oct 31 01:29:30.711771 kernel: rcu: RCU event tracing is enabled.
Oct 31 01:29:30.711779 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Oct 31 01:29:30.711785 kernel: Rude variant of Tasks RCU enabled.
Oct 31 01:29:30.711791 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 01:29:30.711797 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 01:29:30.711802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Oct 31 01:29:30.711808 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Oct 31 01:29:30.711814 kernel: random: crng init done
Oct 31 01:29:30.711820 kernel: Console: colour VGA+ 80x25
Oct 31 01:29:30.711826 kernel: printk: console [tty0] enabled
Oct 31 01:29:30.711832 kernel: printk: console [ttyS0] enabled
Oct 31 01:29:30.711839 kernel: ACPI: Core revision 20210730
Oct 31 01:29:30.711845 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Oct 31 01:29:30.711851 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 01:29:30.711857 kernel: x2apic enabled
Oct 31 01:29:30.711863 kernel: Switched APIC routing to physical x2apic.
Oct 31 01:29:30.711869 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 31 01:29:30.711875 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 31 01:29:30.711881 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Oct 31 01:29:30.711887 kernel: Disabled fast string operations
Oct 31 01:29:30.711893 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 31 01:29:30.711899 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Oct 31 01:29:30.711905 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 01:29:30.711912 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Oct 31 01:29:30.711918 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Oct 31 01:29:30.711923 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Oct 31 01:29:30.711929 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Oct 31 01:29:30.711935 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Oct 31 01:29:30.711942 kernel: RETBleed: Mitigation: Enhanced IBRS
Oct 31 01:29:30.711948 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 01:29:30.711954 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Oct 31 01:29:30.711960 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 31 01:29:30.711966 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 31 01:29:30.711972 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 31 01:29:30.711978 kernel: active return thunk: its_return_thunk
Oct 31 01:29:30.711983 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 31 01:29:30.711989 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 01:29:30.711996 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 01:29:30.712002 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 01:29:30.712008 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 01:29:30.712014 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 31 01:29:30.712020 kernel: Freeing SMP alternatives memory: 32K
Oct 31 01:29:30.712025 kernel: pid_max: default: 131072 minimum: 1024
Oct 31 01:29:30.712031 kernel: LSM: Security Framework initializing
Oct 31 01:29:30.712037 kernel: SELinux: Initializing.
Oct 31 01:29:30.712043 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 01:29:30.712050 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 01:29:30.712056 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Oct 31 01:29:30.712062 kernel: Performance Events: Skylake events, core PMU driver.
Oct 31 01:29:30.712068 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Oct 31 01:29:30.712075 kernel: core: CPUID marked event: 'instructions' unavailable
Oct 31 01:29:30.712081 kernel: core: CPUID marked event: 'bus cycles' unavailable
Oct 31 01:29:30.712086 kernel: core: CPUID marked event: 'cache references' unavailable
Oct 31 01:29:30.712092 kernel: core: CPUID marked event: 'cache misses' unavailable
Oct 31 01:29:30.712097 kernel: core: CPUID marked event: 'branch instructions' unavailable
Oct 31 01:29:30.712104 kernel: core: CPUID marked event: 'branch misses' unavailable
Oct 31 01:29:30.712110 kernel: ... version: 1
Oct 31 01:29:30.712116 kernel: ... bit width: 48
Oct 31 01:29:30.712122 kernel: ... generic registers: 4
Oct 31 01:29:30.712127 kernel: ... value mask: 0000ffffffffffff
Oct 31 01:29:30.712133 kernel: ... max period: 000000007fffffff
Oct 31 01:29:30.712139 kernel: ... fixed-purpose events: 0
Oct 31 01:29:30.712145 kernel: ... event mask: 000000000000000f
Oct 31 01:29:30.712151 kernel: signal: max sigframe size: 1776
Oct 31 01:29:30.712157 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 01:29:30.712163 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 31 01:29:30.712169 kernel: smp: Bringing up secondary CPUs ...
Oct 31 01:29:30.712175 kernel: x86: Booting SMP configuration:
Oct 31 01:29:30.712181 kernel: ....
node #0, CPUs: #1 Oct 31 01:29:30.712186 kernel: Disabled fast string operations Oct 31 01:29:30.712192 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Oct 31 01:29:30.712198 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 31 01:29:30.712204 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 01:29:30.712210 kernel: smpboot: Max logical packages: 128 Oct 31 01:29:30.712217 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 31 01:29:30.712222 kernel: devtmpfs: initialized Oct 31 01:29:30.712228 kernel: x86/mm: Memory block size: 128MB Oct 31 01:29:30.712234 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 31 01:29:30.712240 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 01:29:30.712246 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 31 01:29:30.712252 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 01:29:30.712258 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 01:29:30.712264 kernel: audit: initializing netlink subsys (disabled) Oct 31 01:29:30.712271 kernel: audit: type=2000 audit(1761874169.085:1): state=initialized audit_enabled=0 res=1 Oct 31 01:29:30.712277 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 01:29:30.712283 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 01:29:30.712288 kernel: cpuidle: using governor menu Oct 31 01:29:30.712294 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 31 01:29:30.712300 kernel: ACPI: bus type PCI registered Oct 31 01:29:30.712306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 01:29:30.712312 kernel: dca service started, version 1.12.1 Oct 31 01:29:30.712318 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Oct 31 01:29:30.712325 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Oct 31 01:29:30.712331 kernel: PCI: Using configuration type 1 for base access Oct 31 01:29:30.712337 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 31 01:29:30.712343 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 01:29:30.712348 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 01:29:30.712354 kernel: ACPI: Added _OSI(Module Device) Oct 31 01:29:30.712360 kernel: ACPI: Added _OSI(Processor Device) Oct 31 01:29:30.712366 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 01:29:30.712372 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 31 01:29:30.712379 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 31 01:29:30.712384 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 31 01:29:30.712390 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 01:29:30.715611 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 31 01:29:30.715626 kernel: ACPI: Interpreter enabled Oct 31 01:29:30.715633 kernel: ACPI: PM: (supports S0 S1 S5) Oct 31 01:29:30.715640 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 01:29:30.715645 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 01:29:30.715653 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 31 01:29:30.715660 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 31 01:29:30.715737 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 01:29:30.715788 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 31 01:29:30.715836 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 31 01:29:30.715844 kernel: PCI host bridge to bus 0000:00 Oct 31 01:29:30.715910 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 01:29:30.715957 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Oct 31 01:29:30.716001 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 01:29:30.716044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 01:29:30.716087 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 31 01:29:30.716129 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 31 01:29:30.716188 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Oct 31 01:29:30.716245 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Oct 31 01:29:30.716302 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Oct 31 01:29:30.716358 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Oct 31 01:29:30.716418 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Oct 31 01:29:30.716468 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 31 01:29:30.716517 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 31 01:29:30.716565 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 31 01:29:30.716616 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 31 01:29:30.716669 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Oct 31 01:29:30.716724 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 31 01:29:30.716964 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 31 01:29:30.717032 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Oct 31 01:29:30.717084 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Oct 31 01:29:30.717801 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Oct 31 01:29:30.717889 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Oct 31 01:29:30.717946 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Oct 31 01:29:30.717997 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Oct 31 01:29:30.718057 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Oct 31 01:29:30.718781 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Oct 31 01:29:30.718839 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 01:29:30.718900 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Oct 31 01:29:30.718957 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.719009 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.719063 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.719116 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.719186 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.719252 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.719314 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.719383 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.719466 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.725712 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.725821 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.725896 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.725988 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726046 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.726111 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726180 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.726262 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726322 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.726418 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726489 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Oct 31 01:29:30.726569 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726646 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.726728 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726801 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.726872 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.726933 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.727006 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.727087 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.727174 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.727251 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.727344 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.727445 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.727528 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.727603 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.727671 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.727754 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.727838 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.727915 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.728007 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.728090 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.728183 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.728269 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.728347 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.728443 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.728510 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.728597 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.728675 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.728762 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.728837 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.728915 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.729009 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.729097 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.729181 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.729252 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.729332 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.731672 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.731770 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.731844 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.731915 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.732015 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.732088 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.732170 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.732244 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Oct 31 01:29:30.732300 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.732362 kernel: pci_bus 0000:01: extended config space not accessible Oct 31 01:29:30.732443 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 01:29:30.732515 kernel: pci_bus 0000:02: extended config space not accessible Oct 
31 01:29:30.732530 kernel: acpiphp: Slot [32] registered Oct 31 01:29:30.732540 kernel: acpiphp: Slot [33] registered Oct 31 01:29:30.732550 kernel: acpiphp: Slot [34] registered Oct 31 01:29:30.732556 kernel: acpiphp: Slot [35] registered Oct 31 01:29:30.732561 kernel: acpiphp: Slot [36] registered Oct 31 01:29:30.732567 kernel: acpiphp: Slot [37] registered Oct 31 01:29:30.732575 kernel: acpiphp: Slot [38] registered Oct 31 01:29:30.732584 kernel: acpiphp: Slot [39] registered Oct 31 01:29:30.732593 kernel: acpiphp: Slot [40] registered Oct 31 01:29:30.732605 kernel: acpiphp: Slot [41] registered Oct 31 01:29:30.732614 kernel: acpiphp: Slot [42] registered Oct 31 01:29:30.732623 kernel: acpiphp: Slot [43] registered Oct 31 01:29:30.732629 kernel: acpiphp: Slot [44] registered Oct 31 01:29:30.732635 kernel: acpiphp: Slot [45] registered Oct 31 01:29:30.732641 kernel: acpiphp: Slot [46] registered Oct 31 01:29:30.732647 kernel: acpiphp: Slot [47] registered Oct 31 01:29:30.732653 kernel: acpiphp: Slot [48] registered Oct 31 01:29:30.732662 kernel: acpiphp: Slot [49] registered Oct 31 01:29:30.732672 kernel: acpiphp: Slot [50] registered Oct 31 01:29:30.732683 kernel: acpiphp: Slot [51] registered Oct 31 01:29:30.732690 kernel: acpiphp: Slot [52] registered Oct 31 01:29:30.732696 kernel: acpiphp: Slot [53] registered Oct 31 01:29:30.732701 kernel: acpiphp: Slot [54] registered Oct 31 01:29:30.732709 kernel: acpiphp: Slot [55] registered Oct 31 01:29:30.732719 kernel: acpiphp: Slot [56] registered Oct 31 01:29:30.732729 kernel: acpiphp: Slot [57] registered Oct 31 01:29:30.732736 kernel: acpiphp: Slot [58] registered Oct 31 01:29:30.732742 kernel: acpiphp: Slot [59] registered Oct 31 01:29:30.732752 kernel: acpiphp: Slot [60] registered Oct 31 01:29:30.732762 kernel: acpiphp: Slot [61] registered Oct 31 01:29:30.732771 kernel: acpiphp: Slot [62] registered Oct 31 01:29:30.732780 kernel: acpiphp: Slot [63] registered Oct 31 01:29:30.732849 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Oct 31 01:29:30.732921 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 01:29:30.733006 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 01:29:30.733063 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 01:29:30.733123 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 31 01:29:30.733198 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 31 01:29:30.733272 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 31 01:29:30.733340 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 31 01:29:30.733387 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 31 01:29:30.733483 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Oct 31 01:29:30.733548 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Oct 31 01:29:30.733611 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 31 01:29:30.733667 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 31 01:29:30.733718 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 31 01:29:30.733778 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 01:29:30.733833 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 01:29:30.733905 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 01:29:30.733963 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 01:29:30.734037 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 01:29:30.734111 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 01:29:30.734180 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 01:29:30.734241 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 01:29:30.734294 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 01:29:30.734342 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 01:29:30.734474 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 01:29:30.734533 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 01:29:30.734602 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 01:29:30.734659 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 01:29:30.740495 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 01:29:30.740584 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 01:29:30.740660 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 01:29:30.740740 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 01:29:30.740830 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 01:29:30.740908 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 01:29:30.741001 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 01:29:30.741073 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 01:29:30.741129 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 01:29:30.741186 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 01:29:30.741257 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 01:29:30.741330 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 01:29:30.741391 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 01:29:30.741464 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Oct 31 01:29:30.741518 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Oct 31 01:29:30.741570 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Oct 31 01:29:30.741634 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Oct 31 01:29:30.741702 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Oct 31 01:29:30.741785 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 31 01:29:30.741859 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 31 01:29:30.741938 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 31 01:29:30.742019 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 01:29:30.742094 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 01:29:30.742170 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 01:29:30.742307 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 01:29:30.742425 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 01:29:30.742510 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 01:29:30.742589 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 01:29:30.742650 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 01:29:30.742703 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 01:29:30.742752 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 01:29:30.742802 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 01:29:30.742867 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 01:29:30.742941 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 01:29:30.743018 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 01:29:30.743096 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 01:29:30.743222 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 01:29:30.743301 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 01:29:30.743362 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 01:29:30.743447 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 01:29:30.743527 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 01:29:30.743595 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 01:29:30.743650 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 01:29:30.743704 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 01:29:30.743774 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 01:29:30.743833 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 01:29:30.743903 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 01:29:30.743983 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 01:29:30.744043 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 01:29:30.744093 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 01:29:30.744155 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 01:29:30.744215 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 01:29:30.744279 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 01:29:30.744344 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 01:29:30.744640 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 01:29:30.744702 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 01:29:30.744787 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 01:29:30.744868 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 01:29:30.744924 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 01:29:30.744997 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 01:29:30.745052 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 01:29:30.745122 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 01:29:30.745199 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 01:29:30.745273 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 01:29:30.745355 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 01:29:30.745436 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 01:29:30.745492 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 01:29:30.745553 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 01:29:30.745609 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 01:29:30.745661 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 01:29:30.745711 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 01:29:30.745762 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 01:29:30.745815 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 01:29:30.745868 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 01:29:30.745919 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 01:29:30.745976 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 01:29:30.746038 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 01:29:30.746109 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 01:29:30.746183 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 01:29:30.746244 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 01:29:30.746322 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 01:29:30.746390 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 01:29:30.746504 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 01:29:30.746602 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 01:29:30.746679 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 01:29:30.746735 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 01:29:30.746816 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 01:29:30.746881 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 01:29:30.746960 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 01:29:30.747039 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 
01:29:30.747104 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 01:29:30.747174 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 01:29:30.747240 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 01:29:30.747307 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 01:29:30.747369 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 01:29:30.747457 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 01:29:30.747542 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 01:29:30.747605 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 01:29:30.747663 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 01:29:30.747714 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 01:29:30.747777 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 01:29:30.747787 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 31 01:29:30.747796 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 31 01:29:30.747802 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 31 01:29:30.747808 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 01:29:30.747816 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 31 01:29:30.747822 kernel: iommu: Default domain type: Translated Oct 31 01:29:30.747828 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 01:29:30.747880 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 31 01:29:30.747931 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 01:29:30.747982 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 31 01:29:30.747991 kernel: vgaarb: loaded Oct 31 01:29:30.747997 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 31 01:29:30.748003 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 31 01:29:30.748012 kernel: PTP clock support registered Oct 31 01:29:30.748018 kernel: PCI: Using ACPI for IRQ routing Oct 31 01:29:30.748024 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 01:29:30.748031 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 31 01:29:30.748037 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 31 01:29:30.748043 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 31 01:29:30.748049 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 31 01:29:30.748055 kernel: clocksource: Switched to clocksource tsc-early Oct 31 01:29:30.748061 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 01:29:30.748069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 01:29:30.748075 kernel: pnp: PnP ACPI init Oct 31 01:29:30.748130 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 31 01:29:30.748178 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 31 01:29:30.748226 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 31 01:29:30.748294 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 31 01:29:30.748374 kernel: pnp 00:06: [dma 2] Oct 31 01:29:30.748580 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 31 01:29:30.748630 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 31 01:29:30.748677 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 31 01:29:30.748691 kernel: pnp: PnP ACPI: found 8 devices Oct 31 01:29:30.748702 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 01:29:30.748712 kernel: NET: Registered PF_INET protocol family Oct 31 01:29:30.748723 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 01:29:30.748736 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Oct 31 01:29:30.748745 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 01:29:30.748751 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 31 01:29:30.748758 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 31 01:29:30.748766 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 31 01:29:30.748773 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 01:29:30.748779 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 01:29:30.748785 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 01:29:30.748791 kernel: NET: Registered PF_XDP protocol family Oct 31 01:29:30.748861 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 31 01:29:30.748938 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 31 01:29:30.749018 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 31 01:29:30.749089 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 31 01:29:30.749144 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 31 01:29:30.749460 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 31 01:29:30.749535 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 31 01:29:30.749602 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 31 01:29:30.749720 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 31 01:29:30.750069 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 31 01:29:30.750125 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Oct 31 01:29:30.750179 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 31 01:29:30.750234 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 31 01:29:30.750297 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 31 01:29:30.750354 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 31 01:29:30.750471 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 31 01:29:30.750525 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 31 01:29:30.750811 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 31 01:29:30.750872 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 31 01:29:30.750941 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 31 01:29:30.751029 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 31 01:29:30.751139 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 31 01:29:30.751228 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 31 01:29:30.751298 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 01:29:30.751356 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 01:29:30.751421 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.751475 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.751527 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.751584 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.751635 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.751700 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.751772 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.751838 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.751890 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.751976 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.752040 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.752092 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.752143 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.752214 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.752575 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.752637 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.752691 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.752743 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.752795 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.752847 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.752898 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.752965 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.753014 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.753080 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.753145 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.753195 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.753582 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.753643 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Oct 31 01:29:30.753695 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.753774 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.754075 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.754148 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.754224 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.754289 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.754361 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.754432 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.754485 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.754764 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.754826 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.754898 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.754981 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.755033 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.755084 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.755136 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.755186 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.755236 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.755287 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.755337 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.755390 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.755476 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.755528 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.755579 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.755664 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.756014 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.756301 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.756375 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.756458 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.756535 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.756602 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.756688 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.756978 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.757036 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.757093 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.757510 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.757583 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.757933 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758021 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.758110 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758179 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.758234 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758286 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.758337 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758389 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Oct 31 01:29:30.758450 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758502 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.758552 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758625 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.758688 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758768 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.758848 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.758909 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.759004 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.759058 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 31 01:29:30.759108 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 31 01:29:30.759159 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 01:29:30.759213 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 31 01:29:30.759262 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 01:29:30.759310 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 01:29:30.759641 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 01:29:30.759704 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Oct 31 01:29:30.759758 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 01:29:30.759809 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 01:29:30.760126 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 01:29:30.760210 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 01:29:30.760302 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 01:29:30.760386 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Oct 31 01:29:30.760467 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 01:29:30.760522 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 01:29:30.760575 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 01:29:30.760626 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 01:29:30.760676 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 01:29:30.760727 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 01:29:30.760778 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 01:29:30.760831 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 01:29:30.760886 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 01:29:30.760951 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 01:29:30.761010 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 01:29:30.761066 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 01:29:30.761126 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 01:29:30.761184 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 01:29:30.761237 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 01:29:30.761294 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 01:29:30.761345 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 01:29:30.761403 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 01:29:30.761463 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 01:29:30.761513 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 01:29:30.761569 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 01:29:30.761626 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Oct 31 
01:29:30.761682 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 01:29:30.761739 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 01:29:30.761790 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 01:29:30.761841 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 01:29:30.761893 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 01:29:30.761943 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 01:29:30.761993 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 01:29:30.762043 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 01:29:30.762095 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 01:29:30.762148 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 01:29:30.762199 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 01:29:30.762249 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 01:29:30.762300 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 01:29:30.762350 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 01:29:30.762412 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 01:29:30.762475 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 01:29:30.762532 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 01:29:30.762588 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 01:29:30.762644 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 01:29:30.762699 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 01:29:30.762779 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 01:29:30.763076 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 01:29:30.763133 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 
01:29:30.763507 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 01:29:30.763868 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 01:29:30.763923 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 01:29:30.763976 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 01:29:30.764029 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 01:29:30.764083 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 01:29:30.764134 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 01:29:30.764184 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 01:29:30.764235 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 01:29:30.764286 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 01:29:30.764335 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 01:29:30.764384 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 01:29:30.764451 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 01:29:30.764504 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 01:29:30.764554 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 01:29:30.764613 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 01:29:30.764783 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 01:29:30.764838 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 01:29:30.765177 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 01:29:30.765245 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 01:29:30.765300 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 01:29:30.765353 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 01:29:30.765433 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Oct 31 01:29:30.765487 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 01:29:30.765540 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 01:29:30.765592 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 01:29:30.765642 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 01:29:30.765692 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 01:29:30.765743 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 01:29:30.765793 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 01:29:30.765842 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 01:29:30.765894 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 01:29:30.765944 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 01:29:30.765994 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 01:29:30.766048 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 01:29:30.766100 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 01:29:30.766150 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 01:29:30.766201 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 01:29:30.766250 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 01:29:30.766302 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 01:29:30.766353 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 01:29:30.766411 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 01:29:30.766465 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 01:29:30.766518 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 01:29:30.766568 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 01:29:30.766619 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Oct 31 01:29:30.766669 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 01:29:30.766719 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 01:29:30.766771 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 01:29:30.766821 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 01:29:30.766871 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 01:29:30.766922 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 01:29:30.766989 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 01:29:30.767040 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 01:29:30.767090 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 01:29:30.767139 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 01:29:30.767331 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 01:29:30.767599 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 01:29:30.767659 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 01:29:30.767706 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 01:29:30.767750 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 31 01:29:30.768157 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 31 01:29:30.768303 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 31 01:29:30.768428 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 31 01:29:30.768480 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 01:29:30.768529 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 01:29:30.768583 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 01:29:30.768633 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Oct 31 01:29:30.768892 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 31 01:29:30.768944 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 31 01:29:30.769297 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 31 01:29:30.769355 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 31 01:29:30.769448 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 01:29:30.769503 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 31 01:29:30.769553 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 31 01:29:30.769604 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 01:29:30.769658 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 31 01:29:30.769880 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 31 01:29:30.770183 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 01:29:30.770242 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 31 01:29:30.770643 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 01:29:30.770705 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 31 01:29:30.770759 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 01:29:30.770812 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 31 01:29:30.770861 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 01:29:30.770913 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 31 01:29:30.770962 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 01:29:30.771013 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 31 01:29:30.771064 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 01:29:30.771119 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 31 01:29:30.771297 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 31 01:29:30.771348 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 01:29:30.771740 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 31 01:29:30.771799 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 31 01:29:30.771862 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 01:29:30.771916 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 31 01:29:30.772133 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 31 01:29:30.772426 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 01:29:30.772483 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 31 01:29:30.772532 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 01:29:30.772589 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 31 01:29:30.772641 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 01:29:30.772693 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 31 01:29:30.772741 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 01:29:30.772792 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 31 01:29:30.772843 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 01:29:30.772916 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 31 01:29:30.772968 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 01:29:30.773021 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 31 01:29:30.773070 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 31 01:29:30.773117 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 01:29:30.773169 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 31 01:29:30.773218 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 31 01:29:30.773269 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 01:29:30.773320 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 31 01:29:30.773370 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 31 01:29:30.773685 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 01:29:30.773746 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 31 01:29:30.774037 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 01:29:30.774093 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 31 01:29:30.774146 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 01:29:30.774207 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 31 01:29:30.774258 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 01:29:30.774312 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 31 01:29:30.774361 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 01:29:30.774423 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 31 01:29:30.774475 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 01:29:30.774528 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 31 01:29:30.774591 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 31 01:29:30.774642 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 01:29:30.774695 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 31 01:29:30.774744 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 31 01:29:30.774792 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 01:29:30.774848 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 31 01:29:30.774898 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 01:29:30.774955 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 31 01:29:30.775004 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 01:29:30.775058 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 31 01:29:30.775109 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 01:29:30.775163 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 31 01:29:30.775212 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 01:29:30.775264 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 31 01:29:30.775313 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 01:29:30.775365 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 31 01:29:30.775698 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 01:29:30.775766 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 31 01:29:30.775778 kernel: PCI: CLS 32 bytes, default 64 Oct 31 01:29:30.775785 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 01:29:30.775865 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 01:29:30.775874 kernel: clocksource: Switched to clocksource tsc Oct 31 01:29:30.775880 kernel: Initialise system trusted keyrings Oct 31 01:29:30.775887 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 01:29:30.775894 kernel: Key type asymmetric registered Oct 31 01:29:30.775902 kernel: Asymmetric key parser 'x509' registered Oct 31 01:29:30.775909 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 31 01:29:30.775916 kernel: io scheduler mq-deadline registered Oct 31 01:29:30.775922 kernel: io scheduler kyber registered Oct 31 01:29:30.775928 kernel: io scheduler bfq 
registered Oct 31 01:29:30.775993 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 31 01:29:30.776048 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.776103 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 31 01:29:30.776156 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.776212 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 31 01:29:30.776265 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.776318 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 31 01:29:30.776371 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.776438 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 31 01:29:30.776492 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.776780 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 31 01:29:30.776837 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.777211 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 31 01:29:30.777274 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.777330 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 31 01:29:30.777388 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.777645 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 31 01:29:30.777701 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.777991 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 31 01:29:30.778047 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.778423 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 31 01:29:30.778483 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.778543 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 31 01:29:30.778605 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.778661 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 31 01:29:30.778714 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.778767 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 31 01:29:30.778822 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.778875 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 31 01:29:30.778927 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.778980 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 31 01:29:30.779034 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779090 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 31 01:29:30.779142 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779195 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 31 01:29:30.779246 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779298 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 31 01:29:30.779348 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779411 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 31 01:29:30.779468 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779521 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 31 01:29:30.779574 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779627 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 31 01:29:30.779679 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.779731 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 31 01:29:30.779787 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.780007 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 31 01:29:30.780300 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.780362 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 31 01:29:30.780465 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.780522 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 31 01:29:30.780578 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.780630 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 31 01:29:30.780683 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.780736 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 31 01:29:30.780788 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.780843 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 31 01:29:30.780895 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.781090 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 31 01:29:30.781155 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.781628 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 31 01:29:30.781686 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.781765 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 31 01:29:30.782184 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 01:29:30.782198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 01:29:30.782205 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 01:29:30.782212 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 01:29:30.782219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 31 01:29:30.782228 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 01:29:30.782234 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 01:29:30.782292 kernel: rtc_cmos 00:01: registered as rtc0 Oct 31 01:29:30.782342 kernel: rtc_cmos 00:01: setting system clock to 2025-10-31T01:29:30 UTC (1761874170) Oct 31 01:29:30.782389 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 31 01:29:30.782699 kernel: intel_pstate: CPU model not supported Oct 31 01:29:30.782710 kernel: NET: Registered PF_INET6 protocol family Oct 31 01:29:30.782717 kernel: Segment Routing with IPv6 Oct 31 01:29:30.782724 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 01:29:30.782733 kernel: NET: Registered PF_PACKET protocol family Oct 31 01:29:30.782739 kernel: Key type dns_resolver registered Oct 31 01:29:30.782746 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 01:29:30.782753 kernel: IPI shorthand broadcast: enabled Oct 31 01:29:30.782759 kernel: sched_clock: Marking stable (901245599, 222409931)->(1200315806, -76660276) Oct 31 01:29:30.782766 kernel: registered taskstats version 1 Oct 31 01:29:30.782772 kernel: Loading compiled-in X.509 certificates Oct 31 01:29:30.782779 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 8306d4e745b00e76b5fae2596c709096b7f28adc' Oct 31 01:29:30.782785 kernel: Key type .fscrypt registered Oct 31 01:29:30.782793 kernel: Key type fscrypt-provisioning 
registered Oct 31 01:29:30.782799 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 31 01:29:30.782806 kernel: ima: Allocated hash algorithm: sha1 Oct 31 01:29:30.782812 kernel: ima: No architecture policies found Oct 31 01:29:30.782819 kernel: clk: Disabling unused clocks Oct 31 01:29:30.782826 kernel: Freeing unused kernel image (initmem) memory: 47496K Oct 31 01:29:30.782833 kernel: Write protecting the kernel read-only data: 28672k Oct 31 01:29:30.782840 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 31 01:29:30.782847 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Oct 31 01:29:30.782854 kernel: Run /init as init process Oct 31 01:29:30.782860 kernel: with arguments: Oct 31 01:29:30.782867 kernel: /init Oct 31 01:29:30.782873 kernel: with environment: Oct 31 01:29:30.782879 kernel: HOME=/ Oct 31 01:29:30.782886 kernel: TERM=linux Oct 31 01:29:30.782892 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 31 01:29:30.782900 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 31 01:29:30.782911 systemd[1]: Detected virtualization vmware. Oct 31 01:29:30.782918 systemd[1]: Detected architecture x86-64. Oct 31 01:29:30.782924 systemd[1]: Running in initrd. Oct 31 01:29:30.782931 systemd[1]: No hostname configured, using default hostname. Oct 31 01:29:30.782937 systemd[1]: Hostname set to . Oct 31 01:29:30.782944 systemd[1]: Initializing machine ID from random generator. Oct 31 01:29:30.782950 systemd[1]: Queued start job for default target initrd.target. Oct 31 01:29:30.782957 systemd[1]: Started systemd-ask-password-console.path. Oct 31 01:29:30.782964 systemd[1]: Reached target cryptsetup.target. 
Oct 31 01:29:30.782971 systemd[1]: Reached target paths.target. Oct 31 01:29:30.782977 systemd[1]: Reached target slices.target. Oct 31 01:29:30.782984 systemd[1]: Reached target swap.target. Oct 31 01:29:30.782991 systemd[1]: Reached target timers.target. Oct 31 01:29:30.782998 systemd[1]: Listening on iscsid.socket. Oct 31 01:29:30.783004 systemd[1]: Listening on iscsiuio.socket. Oct 31 01:29:30.783012 systemd[1]: Listening on systemd-journald-audit.socket. Oct 31 01:29:30.783019 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 31 01:29:30.783025 systemd[1]: Listening on systemd-journald.socket. Oct 31 01:29:30.783032 systemd[1]: Listening on systemd-networkd.socket. Oct 31 01:29:30.783038 systemd[1]: Listening on systemd-udevd-control.socket. Oct 31 01:29:30.783045 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 31 01:29:30.783051 systemd[1]: Reached target sockets.target. Oct 31 01:29:30.783058 systemd[1]: Starting kmod-static-nodes.service... Oct 31 01:29:30.783064 systemd[1]: Finished network-cleanup.service. Oct 31 01:29:30.783072 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 01:29:30.783079 systemd[1]: Starting systemd-journald.service... Oct 31 01:29:30.783085 systemd[1]: Starting systemd-modules-load.service... Oct 31 01:29:30.783092 systemd[1]: Starting systemd-resolved.service... Oct 31 01:29:30.783099 systemd[1]: Starting systemd-vconsole-setup.service... Oct 31 01:29:30.783105 systemd[1]: Finished kmod-static-nodes.service. Oct 31 01:29:30.783112 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 01:29:30.783118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 31 01:29:30.783125 systemd[1]: Finished systemd-vconsole-setup.service. Oct 31 01:29:30.783132 systemd[1]: Starting dracut-cmdline-ask.service... Oct 31 01:29:30.783139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Oct 31 01:29:30.783146 kernel: audit: type=1130 audit(1761874170.705:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.783152 systemd[1]: Finished dracut-cmdline-ask.service. Oct 31 01:29:30.783159 systemd[1]: Starting dracut-cmdline.service... Oct 31 01:29:30.783166 kernel: audit: type=1130 audit(1761874170.718:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.783173 systemd[1]: Started systemd-resolved.service. Oct 31 01:29:30.783179 systemd[1]: Reached target nss-lookup.target. Oct 31 01:29:30.783188 kernel: audit: type=1130 audit(1761874170.737:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.783195 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 01:29:30.783201 kernel: Bridge firewalling registered Oct 31 01:29:30.783208 kernel: SCSI subsystem initialized Oct 31 01:29:30.783214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 01:29:30.783226 systemd-journald[217]: Journal started Oct 31 01:29:30.783263 systemd-journald[217]: Runtime Journal (/run/log/journal/bd3fa3dd55e74161a2e220cb5a4ab727) is 4.8M, max 38.8M, 34.0M free. Oct 31 01:29:30.785177 systemd[1]: Started systemd-journald.service. Oct 31 01:29:30.785195 kernel: device-mapper: uevent: version 1.0.3 Oct 31 01:29:30.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 31 01:29:30.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.694648 systemd-modules-load[218]: Inserted module 'overlay' Oct 31 01:29:30.786084 kernel: audit: type=1130 audit(1761874170.784:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.734952 systemd-resolved[219]: Positive Trust Anchors: Oct 31 01:29:30.734958 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:29:30.734978 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 01:29:30.737097 systemd-resolved[219]: Defaulting to hostname 'linux'. 
Oct 31 01:29:30.752280 systemd-modules-load[218]: Inserted module 'br_netfilter' Oct 31 01:29:30.789691 dracut-cmdline[233]: dracut-dracut-053 Oct 31 01:29:30.789691 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Oct 31 01:29:30.789691 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3 Oct 31 01:29:30.791553 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 31 01:29:30.794644 systemd-modules-load[218]: Inserted module 'dm_multipath' Oct 31 01:29:30.795216 systemd[1]: Finished systemd-modules-load.service. Oct 31 01:29:30.795804 systemd[1]: Starting systemd-sysctl.service... Oct 31 01:29:30.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.799460 kernel: audit: type=1130 audit(1761874170.793:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.803226 systemd[1]: Finished systemd-sysctl.service. Oct 31 01:29:30.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.807412 kernel: audit: type=1130 audit(1761874170.802:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:29:30.807438 kernel: Loading iSCSI transport class v2.0-870. Oct 31 01:29:30.821422 kernel: iscsi: registered transport (tcp) Oct 31 01:29:30.840420 kernel: iscsi: registered transport (qla4xxx) Oct 31 01:29:30.840461 kernel: QLogic iSCSI HBA Driver Oct 31 01:29:30.857793 systemd[1]: Finished dracut-cmdline.service. Oct 31 01:29:30.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.858470 systemd[1]: Starting dracut-pre-udev.service... Oct 31 01:29:30.861409 kernel: audit: type=1130 audit(1761874170.856:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:30.897438 kernel: raid6: avx2x4 gen() 37924 MB/s Oct 31 01:29:30.913422 kernel: raid6: avx2x4 xor() 13899 MB/s Oct 31 01:29:30.930417 kernel: raid6: avx2x2 gen() 29444 MB/s Oct 31 01:29:30.947429 kernel: raid6: avx2x2 xor() 17668 MB/s Oct 31 01:29:30.964414 kernel: raid6: avx2x1 gen() 41806 MB/s Oct 31 01:29:30.981419 kernel: raid6: avx2x1 xor() 25811 MB/s Oct 31 01:29:30.998438 kernel: raid6: sse2x4 gen() 19513 MB/s Oct 31 01:29:31.015431 kernel: raid6: sse2x4 xor() 10585 MB/s Oct 31 01:29:31.032426 kernel: raid6: sse2x2 gen() 18941 MB/s Oct 31 01:29:31.049428 kernel: raid6: sse2x2 xor() 12481 MB/s Oct 31 01:29:31.066412 kernel: raid6: sse2x1 gen() 17307 MB/s Oct 31 01:29:31.083628 kernel: raid6: sse2x1 xor() 8489 MB/s Oct 31 01:29:31.083666 kernel: raid6: using algorithm avx2x1 gen() 41806 MB/s Oct 31 01:29:31.083675 kernel: raid6: .... 
xor() 25811 MB/s, rmw enabled Oct 31 01:29:31.084838 kernel: raid6: using avx2x2 recovery algorithm Oct 31 01:29:31.093408 kernel: xor: automatically using best checksumming function avx Oct 31 01:29:31.154499 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 31 01:29:31.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:31.159050 systemd[1]: Finished dracut-pre-udev.service. Oct 31 01:29:31.163042 kernel: audit: type=1130 audit(1761874171.157:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:31.163059 kernel: audit: type=1334 audit(1761874171.158:10): prog-id=7 op=LOAD Oct 31 01:29:31.158000 audit: BPF prog-id=7 op=LOAD Oct 31 01:29:31.158000 audit: BPF prog-id=8 op=LOAD Oct 31 01:29:31.159672 systemd[1]: Starting systemd-udevd.service... Oct 31 01:29:31.170738 systemd-udevd[415]: Using default interface naming scheme 'v252'. Oct 31 01:29:31.173668 systemd[1]: Started systemd-udevd.service. Oct 31 01:29:31.174226 systemd[1]: Starting dracut-pre-trigger.service... Oct 31 01:29:31.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:31.181783 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Oct 31 01:29:31.198819 systemd[1]: Finished dracut-pre-trigger.service. Oct 31 01:29:31.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:31.199379 systemd[1]: Starting systemd-udev-trigger.service... 
Oct 31 01:29:31.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:31.264525 systemd[1]: Finished systemd-udev-trigger.service. Oct 31 01:29:31.324410 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 31 01:29:31.325640 kernel: vmw_pvscsi: using 64bit dma Oct 31 01:29:31.325662 kernel: vmw_pvscsi: max_id: 16 Oct 31 01:29:31.325670 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 31 01:29:31.336290 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 31 01:29:31.336322 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 31 01:29:31.336334 kernel: vmw_pvscsi: using MSI-X Oct 31 01:29:31.336347 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 31 01:29:31.338977 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 31 01:29:31.339070 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 31 01:29:31.345408 kernel: libata version 3.00 loaded. 
Oct 31 01:29:31.348443 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 31 01:29:31.352627 kernel: scsi host1: ata_piix Oct 31 01:29:31.352705 kernel: scsi host2: ata_piix Oct 31 01:29:31.352769 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Oct 31 01:29:31.352777 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Oct 31 01:29:31.355407 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Oct 31 01:29:31.356405 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 01:29:31.363418 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 31 01:29:31.363513 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 31 01:29:31.516420 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 31 01:29:31.520418 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 31 01:29:31.527416 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 31 01:29:31.529788 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Oct 31 01:29:31.535331 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 31 01:29:31.535424 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 31 01:29:31.535496 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 31 01:29:31.535579 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 31 01:29:31.535652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 01:29:31.535661 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 31 01:29:31.542863 kernel: AVX2 version of gcm_enc/dec engaged. Oct 31 01:29:31.542900 kernel: AES CTR mode by8 optimization enabled Oct 31 01:29:31.560415 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 31 01:29:31.581472 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 01:29:31.581484 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 01:29:31.622888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Oct 31 01:29:31.623476 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (466) Oct 31 01:29:31.630124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 31 01:29:31.632676 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 01:29:31.634883 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 31 01:29:31.635191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 31 01:29:31.636061 systemd[1]: Starting disk-uuid.service... Oct 31 01:29:31.658427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 01:29:31.663413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 01:29:32.685412 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 01:29:32.685698 disk-uuid[548]: The operation has completed successfully. Oct 31 01:29:32.720658 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 01:29:32.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:32.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:32.720725 systemd[1]: Finished disk-uuid.service. Oct 31 01:29:32.721333 systemd[1]: Starting verity-setup.service... Oct 31 01:29:32.733441 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 31 01:29:32.833953 systemd[1]: Found device dev-mapper-usr.device. Oct 31 01:29:32.834493 systemd[1]: Mounting sysusr-usr.mount... Oct 31 01:29:32.835473 systemd[1]: Finished verity-setup.service. Oct 31 01:29:32.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:29:32.902413 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 31 01:29:32.902491 systemd[1]: Mounted sysusr-usr.mount. Oct 31 01:29:32.903114 systemd[1]: Starting afterburn-network-kargs.service... Oct 31 01:29:32.903602 systemd[1]: Starting ignition-setup.service... Oct 31 01:29:32.943509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:29:32.943542 kernel: BTRFS info (device sda6): using free space tree Oct 31 01:29:32.943556 kernel: BTRFS info (device sda6): has skinny extents Oct 31 01:29:32.950413 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 01:29:32.957753 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 01:29:32.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:32.963230 systemd[1]: Finished ignition-setup.service. Oct 31 01:29:32.963861 systemd[1]: Starting ignition-fetch-offline.service... Oct 31 01:29:33.063312 systemd[1]: Finished afterburn-network-kargs.service. Oct 31 01:29:33.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.064004 systemd[1]: Starting parse-ip-for-networkd.service... Oct 31 01:29:33.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.119184 systemd[1]: Finished parse-ip-for-networkd.service. Oct 31 01:29:33.118000 audit: BPF prog-id=9 op=LOAD Oct 31 01:29:33.120108 systemd[1]: Starting systemd-networkd.service... 
Oct 31 01:29:33.134554 systemd-networkd[733]: lo: Link UP Oct 31 01:29:33.134563 systemd-networkd[733]: lo: Gained carrier Oct 31 01:29:33.134874 systemd-networkd[733]: Enumeration completed Oct 31 01:29:33.138442 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 01:29:33.138578 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 01:29:33.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.135072 systemd-networkd[733]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 31 01:29:33.135138 systemd[1]: Started systemd-networkd.service. Oct 31 01:29:33.135284 systemd[1]: Reached target network.target. Oct 31 01:29:33.135827 systemd[1]: Starting iscsiuio.service... Oct 31 01:29:33.138769 systemd-networkd[733]: ens192: Link UP Oct 31 01:29:33.138772 systemd-networkd[733]: ens192: Gained carrier Oct 31 01:29:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.141526 systemd[1]: Started iscsiuio.service. Oct 31 01:29:33.142135 systemd[1]: Starting iscsid.service... Oct 31 01:29:33.144615 iscsid[738]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 31 01:29:33.144615 iscsid[738]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Oct 31 01:29:33.144615 iscsid[738]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 31 01:29:33.144615 iscsid[738]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 31 01:29:33.144615 iscsid[738]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 31 01:29:33.144615 iscsid[738]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 31 01:29:33.145443 systemd[1]: Started iscsid.service. Oct 31 01:29:33.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.146075 systemd[1]: Starting dracut-initqueue.service... Oct 31 01:29:33.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.152224 systemd[1]: Finished dracut-initqueue.service. Oct 31 01:29:33.152373 systemd[1]: Reached target remote-fs-pre.target. Oct 31 01:29:33.152470 systemd[1]: Reached target remote-cryptsetup.target. Oct 31 01:29:33.152561 systemd[1]: Reached target remote-fs.target. Oct 31 01:29:33.153623 systemd[1]: Starting dracut-pre-mount.service... Oct 31 01:29:33.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.158514 systemd[1]: Finished dracut-pre-mount.service. 
Oct 31 01:29:33.204843 ignition[605]: Ignition 2.14.0 Oct 31 01:29:33.204850 ignition[605]: Stage: fetch-offline Oct 31 01:29:33.204901 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 31 01:29:33.204918 ignition[605]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 31 01:29:33.208901 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:29:33.208997 ignition[605]: parsed url from cmdline: "" Oct 31 01:29:33.209000 ignition[605]: no config URL provided Oct 31 01:29:33.209003 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 01:29:33.209009 ignition[605]: no config at "/usr/lib/ignition/user.ign" Oct 31 01:29:33.209482 ignition[605]: config successfully fetched Oct 31 01:29:33.209505 ignition[605]: parsing config with SHA512: 353e6e7d3912a123828f1df391410b5077e274423a7a1f5d22a3907b742abb2d146fdec43bd1dd806119949331ee38263b84144d6505726576e9e4d4848e0147 Oct 31 01:29:33.221926 unknown[605]: fetched base config from "system" Oct 31 01:29:33.221937 unknown[605]: fetched user config from "vmware" Oct 31 01:29:33.222345 ignition[605]: fetch-offline: fetch-offline passed Oct 31 01:29:33.222392 ignition[605]: Ignition finished successfully Oct 31 01:29:33.223195 systemd[1]: Finished ignition-fetch-offline.service. Oct 31 01:29:33.223370 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 01:29:33.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.223891 systemd[1]: Starting ignition-kargs.service... 
Oct 31 01:29:33.229880 ignition[753]: Ignition 2.14.0 Oct 31 01:29:33.229893 ignition[753]: Stage: kargs Oct 31 01:29:33.229984 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 31 01:29:33.229997 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 31 01:29:33.231373 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:29:33.232874 ignition[753]: kargs: kargs passed Oct 31 01:29:33.232903 ignition[753]: Ignition finished successfully Oct 31 01:29:33.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.233928 systemd[1]: Finished ignition-kargs.service. Oct 31 01:29:33.234535 systemd[1]: Starting ignition-disks.service... Oct 31 01:29:33.238796 ignition[760]: Ignition 2.14.0 Oct 31 01:29:33.238803 ignition[760]: Stage: disks Oct 31 01:29:33.238862 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 31 01:29:33.238871 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Oct 31 01:29:33.240119 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 01:29:33.241521 ignition[760]: disks: disks passed Oct 31 01:29:33.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:33.242019 systemd[1]: Finished ignition-disks.service. Oct 31 01:29:33.241559 ignition[760]: Ignition finished successfully Oct 31 01:29:33.242443 systemd[1]: Reached target initrd-root-device.target. 
Oct 31 01:29:33.242662 systemd[1]: Reached target local-fs-pre.target.
Oct 31 01:29:33.242752 systemd[1]: Reached target local-fs.target.
Oct 31 01:29:33.242956 systemd[1]: Reached target sysinit.target.
Oct 31 01:29:33.243153 systemd[1]: Reached target basic.target.
Oct 31 01:29:33.244135 systemd[1]: Starting systemd-fsck-root.service...
Oct 31 01:29:33.294909 systemd-fsck[768]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks
Oct 31 01:29:33.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:33.296123 systemd[1]: Finished systemd-fsck-root.service.
Oct 31 01:29:33.296966 systemd[1]: Mounting sysroot.mount...
Oct 31 01:29:33.305057 systemd[1]: Mounted sysroot.mount.
Oct 31 01:29:33.305196 systemd[1]: Reached target initrd-root-fs.target.
Oct 31 01:29:33.305443 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 31 01:29:33.306065 systemd[1]: Mounting sysroot-usr.mount...
Oct 31 01:29:33.306498 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 31 01:29:33.306529 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 01:29:33.306550 systemd[1]: Reached target ignition-diskful.target.
Oct 31 01:29:33.308662 systemd[1]: Mounted sysroot-usr.mount.
Oct 31 01:29:33.309195 systemd[1]: Starting initrd-setup-root.service...
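The `systemd-fsck` entry above summarizes the root filesystem as `ROOT: clean, 637/1628000 files, 124069/1617920 blocks`. A small sketch of parsing that summary into numbers (the regex and field names are this example's own, not anything e2fsck exports):

```python
import re

# Hypothetical parser for an e2fsck/systemd-fsck summary line like the one
# logged above: "<label>: clean, <files>/<max_files> files, <blocks>/<max_blocks> blocks"
FSCK_RE = re.compile(
    r'(?P<label>\S+): clean, (?P<files>\d+)/(?P<max_files>\d+) files, '
    r'(?P<blocks>\d+)/(?P<max_blocks>\d+) blocks')

def parse_fsck(line):
    m = FSCK_RE.search(line)
    if not m:
        return None
    d = {k: (v if k == 'label' else int(v)) for k, v in m.groupdict().items()}
    # Derived convenience field: percentage of blocks in use.
    d['block_usage_pct'] = round(100 * d['blocks'] / d['max_blocks'], 1)
    return d

info = parse_fsck('systemd-fsck[768]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks')
print(info['label'], info['block_usage_pct'])  # ROOT 7.7
```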
Oct 31 01:29:33.312382 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 01:29:33.316194 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Oct 31 01:29:33.318485 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 01:29:33.320799 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 01:29:33.352908 systemd[1]: Finished initrd-setup-root.service.
Oct 31 01:29:33.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:33.353548 systemd[1]: Starting ignition-mount.service...
Oct 31 01:29:33.354118 systemd[1]: Starting sysroot-boot.service...
Oct 31 01:29:33.357881 bash[819]: umount: /sysroot/usr/share/oem: not mounted.
Oct 31 01:29:33.364134 ignition[820]: INFO : Ignition 2.14.0
Oct 31 01:29:33.364392 ignition[820]: INFO : Stage: mount
Oct 31 01:29:33.364639 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 01:29:33.364795 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 31 01:29:33.366242 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 01:29:33.367498 ignition[820]: INFO : mount: mount passed
Oct 31 01:29:33.367640 ignition[820]: INFO : Ignition finished successfully
Oct 31 01:29:33.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:33.368454 systemd[1]: Finished ignition-mount.service.
Oct 31 01:29:33.390167 systemd[1]: Finished sysroot-boot.service.
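The `audit[1]: SERVICE_START ...` records throughout this log are flat `key=value` fields plus a quoted `msg='...'` sub-record. A sketch of splitting one into a dict (the function and its return shape are this example's invention, not an auditd API):

```python
import re
import shlex

# Hypothetical parser for audit records like the SERVICE_START/SERVICE_STOP
# entries above; not an auditd/ausearch API.
def parse_audit(record):
    """Split an audit record body into a dict; msg='...' becomes a nested dict."""
    fields = {}
    m = re.search(r"msg='([^']*)'", record)
    body = record[:m.start()] if m else record
    for tok in body.split():
        if '=' in tok:
            k, v = tok.split('=', 1)
            fields[k] = v
    if m:
        # shlex handles the double-quoted values inside msg, e.g. comm="systemd".
        fields['msg'] = dict(
            tok.split('=', 1) for tok in shlex.split(m.group(1)) if '=' in tok)
    return fields

rec = ("SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel "
       "msg='unit=ignition-kargs comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
       "hostname=? addr=? terminal=? res=success'")
f = parse_audit(rec)
print(f['pid'], f['msg']['unit'], f['msg']['res'])  # 1 ignition-kargs success
```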
Oct 31 01:29:33.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:33.853745 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 31 01:29:33.950424 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829)
Oct 31 01:29:33.970050 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 01:29:33.970092 kernel: BTRFS info (device sda6): using free space tree
Oct 31 01:29:33.970101 kernel: BTRFS info (device sda6): has skinny extents
Oct 31 01:29:33.986421 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 31 01:29:33.986656 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 31 01:29:33.987327 systemd[1]: Starting ignition-files.service...
Oct 31 01:29:33.997516 ignition[849]: INFO : Ignition 2.14.0
Oct 31 01:29:33.997516 ignition[849]: INFO : Stage: files
Oct 31 01:29:33.997899 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 01:29:33.997899 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 31 01:29:33.999142 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 01:29:34.001282 ignition[849]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 01:29:34.001749 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 01:29:34.001749 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 01:29:34.003704 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 01:29:34.003975 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 01:29:34.004784 unknown[849]: wrote ssh authorized keys file for user: core
Oct 31 01:29:34.005006 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 01:29:34.005373 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 31 01:29:34.005570 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 31 01:29:34.005570 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 01:29:34.005570 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 31 01:29:35.048466 systemd-networkd[733]: ens192: Gained IPv6LL
Oct 31 01:29:35.059859 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 31 01:29:35.241444 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 01:29:35.241709 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 01:29:35.241709 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 01:29:35.241709 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 01:29:35.241709 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 01:29:35.241709 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Oct 31 01:29:35.242528 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Oct 31 01:29:35.247410 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948657492"
Oct 31 01:29:35.247675 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948657492": device or resource busy
Oct 31 01:29:35.247675 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3948657492", trying btrfs: device or resource busy
Oct 31 01:29:35.247675 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948657492"
Oct 31 01:29:35.247675 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3948657492"
Oct 31 01:29:35.248514 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3948657492"
Oct 31 01:29:35.248514 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3948657492"
Oct 31 01:29:35.249603 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Oct 31 01:29:35.249603 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:29:35.249603 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 31 01:29:35.672188 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Oct 31 01:29:35.985823 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 01:29:35.994116 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(12): [started] processing unit "containerd.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(12): [finished] processing unit "containerd.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(16): [started] processing unit "coreos-metadata.service"
Oct 31 01:29:35.994374 ignition[849]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 01:29:35.997136 ignition[849]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 01:29:35.997136 ignition[849]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service"
Oct 31 01:29:35.997136 ignition[849]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 01:29:35.997136 ignition[849]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 01:29:35.997136 ignition[849]: INFO : files: op(19): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 01:29:35.997136 ignition[849]: INFO : files: op(19): op(1a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 01:29:36.094768 ignition[849]: INFO : files: op(19): op(1a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 01:29:36.095151 ignition[849]: INFO : files: op(19): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 01:29:36.095151 ignition[849]: INFO : files: op(1b): [started] setting preset to enabled for "vmtoolsd.service"
Oct 31 01:29:36.095151 ignition[849]: INFO : files: op(1b): [finished] setting preset to enabled for "vmtoolsd.service"
Oct 31 01:29:36.095151 ignition[849]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 01:29:36.095151 ignition[849]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 01:29:36.095151 ignition[849]: INFO : files: files passed
Oct 31 01:29:36.095151 ignition[849]: INFO : Ignition finished successfully
Oct 31 01:29:36.096985 systemd[1]: Finished ignition-files.service.
Oct 31 01:29:36.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.097737 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 31 01:29:36.100779 kernel: kauditd_printk_skb: 23 callbacks suppressed
Oct 31 01:29:36.100799 kernel: audit: type=1130 audit(1761874176.095:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.100889 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 31 01:29:36.101567 systemd[1]: Starting ignition-quench.service...
Oct 31 01:29:36.103517 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 01:29:36.103710 systemd[1]: Finished ignition-quench.service.
Oct 31 01:29:36.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.109034 kernel: audit: type=1130 audit(1761874176.102:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.109061 kernel: audit: type=1131 audit(1761874176.102:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.110670 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 01:29:36.111173 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 31 01:29:36.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.111509 systemd[1]: Reached target ignition-complete.target.
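The `op(c)`/`op(d)` entries above show Ignition's mount fallback: the ext4 mount of the OEM partition fails with "device or resource busy", so it retries the same device as btrfs, which succeeds. A minimal sketch of that try-each-filesystem pattern, with the mount call injected so it can be simulated (all names here are illustrative, not Ignition's internals):

```python
def mount_with_fallback(device, mountpoint, mount_fn, fstypes=('ext4', 'btrfs')):
    """Try each filesystem type in turn; return the first one that mounts."""
    errors = []
    for fstype in fstypes:
        try:
            mount_fn(device, mountpoint, fstype)
            return fstype
        except OSError as e:
            # e.g. "device or resource busy" when the fstype is wrong,
            # as logged for the ext4 attempt above.
            errors.append(f'{fstype}: {e}')
    raise OSError('; '.join(errors))

# Simulated mount: the OEM partition in this log is btrfs, so ext4 fails first.
def fake_mount(device, mountpoint, fstype):
    if fstype != 'btrfs':
        raise OSError('device or resource busy')

print(mount_with_fallback('/dev/disk/by-label/OEM', '/mnt/oem', fake_mount))  # btrfs
```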
Oct 31 01:29:36.114434 kernel: audit: type=1130 audit(1761874176.110:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.114767 systemd[1]: Starting initrd-parse-etc.service...
Oct 31 01:29:36.123261 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 01:29:36.123510 systemd[1]: Finished initrd-parse-etc.service.
Oct 31 01:29:36.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.123835 systemd[1]: Reached target initrd-fs.target.
Oct 31 01:29:36.128613 kernel: audit: type=1130 audit(1761874176.122:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.128627 kernel: audit: type=1131 audit(1761874176.122:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.128756 systemd[1]: Reached target initrd.target.
Oct 31 01:29:36.128991 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 31 01:29:36.129595 systemd[1]: Starting dracut-pre-pivot.service...
Oct 31 01:29:36.136038 systemd[1]: Finished dracut-pre-pivot.service.
Oct 31 01:29:36.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.136707 systemd[1]: Starting initrd-cleanup.service...
Oct 31 01:29:36.139554 kernel: audit: type=1130 audit(1761874176.134:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.143906 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 01:29:36.143958 systemd[1]: Finished initrd-cleanup.service.
Oct 31 01:29:36.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.144562 systemd[1]: Stopped target nss-lookup.target.
Oct 31 01:29:36.149036 kernel: audit: type=1130 audit(1761874176.142:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.149052 kernel: audit: type=1131 audit(1761874176.142:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.148942 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 31 01:29:36.149099 systemd[1]: Stopped target timers.target.
Oct 31 01:29:36.149287 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 01:29:36.151893 kernel: audit: type=1131 audit(1761874176.148:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.149316 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 31 01:29:36.149460 systemd[1]: Stopped target initrd.target.
Oct 31 01:29:36.151943 systemd[1]: Stopped target basic.target.
Oct 31 01:29:36.152118 systemd[1]: Stopped target ignition-complete.target.
Oct 31 01:29:36.152201 systemd[1]: Stopped target ignition-diskful.target.
Oct 31 01:29:36.152361 systemd[1]: Stopped target initrd-root-device.target.
Oct 31 01:29:36.152555 systemd[1]: Stopped target remote-fs.target.
Oct 31 01:29:36.152721 systemd[1]: Stopped target remote-fs-pre.target.
Oct 31 01:29:36.152898 systemd[1]: Stopped target sysinit.target.
Oct 31 01:29:36.153055 systemd[1]: Stopped target local-fs.target.
Oct 31 01:29:36.153210 systemd[1]: Stopped target local-fs-pre.target.
Oct 31 01:29:36.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.153374 systemd[1]: Stopped target swap.target.
Oct 31 01:29:36.153534 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 01:29:36.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.153559 systemd[1]: Stopped dracut-pre-mount.service.
Oct 31 01:29:36.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.153716 systemd[1]: Stopped target cryptsetup.target.
Oct 31 01:29:36.153859 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 01:29:36.153881 systemd[1]: Stopped dracut-initqueue.service.
Oct 31 01:29:36.154042 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 01:29:36.154064 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 31 01:29:36.154208 systemd[1]: Stopped target paths.target.
Oct 31 01:29:36.154349 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 01:29:36.157416 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 31 01:29:36.157525 systemd[1]: Stopped target slices.target.
Oct 31 01:29:36.157688 systemd[1]: Stopped target sockets.target.
Oct 31 01:29:36.157862 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 01:29:36.157878 systemd[1]: Closed iscsid.socket.
Oct 31 01:29:36.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.158020 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 01:29:36.158041 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 31 01:29:36.158214 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 01:29:36.158234 systemd[1]: Stopped ignition-files.service.
Oct 31 01:29:36.158813 systemd[1]: Stopping ignition-mount.service...
Oct 31 01:29:36.159431 systemd[1]: Stopping iscsiuio.service...
Oct 31 01:29:36.159519 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 01:29:36.159546 systemd[1]: Stopped kmod-static-nodes.service.
Oct 31 01:29:36.160012 systemd[1]: Stopping sysroot-boot.service...
Oct 31 01:29:36.160118 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 01:29:36.160146 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 31 01:29:36.160269 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 01:29:36.160291 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 31 01:29:36.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.166538 ignition[888]: INFO : Ignition 2.14.0
Oct 31 01:29:36.166538 ignition[888]: INFO : Stage: umount
Oct 31 01:29:36.166538 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 01:29:36.166538 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Oct 31 01:29:36.161754 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 31 01:29:36.168133 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 01:29:36.161817 systemd[1]: Stopped iscsiuio.service.
Oct 31 01:29:36.161971 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 01:29:36.161986 systemd[1]: Closed iscsiuio.socket.
Oct 31 01:29:36.168739 ignition[888]: INFO : umount: umount passed
Oct 31 01:29:36.169149 ignition[888]: INFO : Ignition finished successfully
Oct 31 01:29:36.169229 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 01:29:36.169377 systemd[1]: Stopped ignition-mount.service.
Oct 31 01:29:36.170040 systemd[1]: Stopped target network.target.
Oct 31 01:29:36.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.170142 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 01:29:36.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.170170 systemd[1]: Stopped ignition-disks.service.
Oct 31 01:29:36.170324 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 01:29:36.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.170346 systemd[1]: Stopped ignition-kargs.service.
Oct 31 01:29:36.170506 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 01:29:36.170528 systemd[1]: Stopped ignition-setup.service.
Oct 31 01:29:36.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.171386 systemd[1]: Stopping systemd-networkd.service...
Oct 31 01:29:36.171584 systemd[1]: Stopping systemd-resolved.service...
Oct 31 01:29:36.174711 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 01:29:36.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.174967 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 01:29:36.175019 systemd[1]: Stopped systemd-networkd.service.
Oct 31 01:29:36.175590 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 01:29:36.175610 systemd[1]: Closed systemd-networkd.socket.
Oct 31 01:29:36.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.177000 audit: BPF prog-id=9 op=UNLOAD
Oct 31 01:29:36.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.176205 systemd[1]: Stopping network-cleanup.service...
Oct 31 01:29:36.176307 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 01:29:36.176335 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 31 01:29:36.176481 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 31 01:29:36.176504 systemd[1]: Stopped afterburn-network-kargs.service.
Oct 31 01:29:36.176616 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 01:29:36.176637 systemd[1]: Stopped systemd-sysctl.service.
Oct 31 01:29:36.176797 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 01:29:36.176818 systemd[1]: Stopped systemd-modules-load.service.
Oct 31 01:29:36.176964 systemd[1]: Stopping systemd-udevd.service...
Oct 31 01:29:36.178672 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 31 01:29:36.180247 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 01:29:36.180348 systemd[1]: Stopped systemd-resolved.service.
Oct 31 01:29:36.181167 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 01:29:36.181214 systemd[1]: Stopped network-cleanup.service.
Oct 31 01:29:36.182000 audit: BPF prog-id=6 op=UNLOAD
Oct 31 01:29:36.184321 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 01:29:36.184401 systemd[1]: Stopped systemd-udevd.service.
Oct 31 01:29:36.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:36.184777 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 01:29:36.184802 systemd[1]: Closed systemd-udevd-control.socket.
Oct 31 01:29:36.185016 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 01:29:36.185034 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 31 01:29:36.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.185174 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 01:29:36.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.185204 systemd[1]: Stopped dracut-pre-udev.service. Oct 31 01:29:36.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.185367 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 01:29:36.185386 systemd[1]: Stopped dracut-cmdline.service. Oct 31 01:29:36.185544 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 01:29:36.185563 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 31 01:29:36.186162 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 31 01:29:36.187533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 01:29:36.187571 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 31 01:29:36.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:29:36.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.189705 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 01:29:36.189753 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 31 01:29:36.386950 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 01:29:36.387029 systemd[1]: Stopped sysroot-boot.service. Oct 31 01:29:36.387359 systemd[1]: Reached target initrd-switch-root.target. Oct 31 01:29:36.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.387513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 01:29:36.387550 systemd[1]: Stopped initrd-setup-root.service. Oct 31 01:29:36.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:36.388313 systemd[1]: Starting initrd-switch-root.service... Oct 31 01:29:36.411127 systemd[1]: Switching root. Oct 31 01:29:36.410000 audit: BPF prog-id=8 op=UNLOAD Oct 31 01:29:36.410000 audit: BPF prog-id=7 op=UNLOAD Oct 31 01:29:36.413000 audit: BPF prog-id=5 op=UNLOAD Oct 31 01:29:36.413000 audit: BPF prog-id=4 op=UNLOAD Oct 31 01:29:36.413000 audit: BPF prog-id=3 op=UNLOAD Oct 31 01:29:36.430075 systemd-journald[217]: Journal stopped Oct 31 01:29:39.516005 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Oct 31 01:29:39.516028 kernel: SELinux: Class mctp_socket not defined in policy. Oct 31 01:29:39.516037 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 31 01:29:39.516044 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 31 01:29:39.516049 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 01:29:39.516056 kernel: SELinux: policy capability open_perms=1 Oct 31 01:29:39.516062 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 01:29:39.516068 kernel: SELinux: policy capability always_check_network=0 Oct 31 01:29:39.516074 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 01:29:39.516079 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 01:29:39.516085 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 01:29:39.516091 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 01:29:39.516099 systemd[1]: Successfully loaded SELinux policy in 36.288ms. Oct 31 01:29:39.516106 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.140ms. Oct 31 01:29:39.516115 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 31 01:29:39.516122 systemd[1]: Detected virtualization vmware. Oct 31 01:29:39.516129 systemd[1]: Detected architecture x86-64. Oct 31 01:29:39.516136 systemd[1]: Detected first boot. Oct 31 01:29:39.516143 systemd[1]: Initializing machine ID from random generator. Oct 31 01:29:39.516150 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Oct 31 01:29:39.516156 systemd[1]: Populated /etc with preset unit settings. Oct 31 01:29:39.516164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Oct 31 01:29:39.516171 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:29:39.516179 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:29:39.516187 systemd[1]: Queued start job for default target multi-user.target. Oct 31 01:29:39.516194 systemd[1]: Unnecessary job was removed for dev-sda6.device. Oct 31 01:29:39.516200 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 31 01:29:39.516208 systemd[1]: Created slice system-addon\x2drun.slice. Oct 31 01:29:39.516214 systemd[1]: Created slice system-getty.slice. Oct 31 01:29:39.516221 systemd[1]: Created slice system-modprobe.slice. Oct 31 01:29:39.516228 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 31 01:29:39.516236 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 31 01:29:39.516243 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 31 01:29:39.516250 systemd[1]: Created slice user.slice. Oct 31 01:29:39.516256 systemd[1]: Started systemd-ask-password-console.path. Oct 31 01:29:39.516263 systemd[1]: Started systemd-ask-password-wall.path. Oct 31 01:29:39.516270 systemd[1]: Set up automount boot.automount. Oct 31 01:29:39.516276 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 31 01:29:39.516283 systemd[1]: Reached target integritysetup.target. Oct 31 01:29:39.516290 systemd[1]: Reached target remote-cryptsetup.target. Oct 31 01:29:39.516299 systemd[1]: Reached target remote-fs.target. Oct 31 01:29:39.516307 systemd[1]: Reached target slices.target. Oct 31 01:29:39.516314 systemd[1]: Reached target swap.target. Oct 31 01:29:39.516321 systemd[1]: Reached target torcx.target. Oct 31 01:29:39.516328 systemd[1]: Reached target veritysetup.target. 
Oct 31 01:29:39.516336 systemd[1]: Listening on systemd-coredump.socket. Oct 31 01:29:39.516343 systemd[1]: Listening on systemd-initctl.socket. Oct 31 01:29:39.516350 systemd[1]: Listening on systemd-journald-audit.socket. Oct 31 01:29:39.516358 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 31 01:29:39.516365 systemd[1]: Listening on systemd-journald.socket. Oct 31 01:29:39.516372 systemd[1]: Listening on systemd-networkd.socket. Oct 31 01:29:39.518815 systemd[1]: Listening on systemd-udevd-control.socket. Oct 31 01:29:39.518829 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 31 01:29:39.518838 systemd[1]: Listening on systemd-userdbd.socket. Oct 31 01:29:39.518848 systemd[1]: Mounting dev-hugepages.mount... Oct 31 01:29:39.518855 systemd[1]: Mounting dev-mqueue.mount... Oct 31 01:29:39.518862 systemd[1]: Mounting media.mount... Oct 31 01:29:39.518870 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:29:39.518877 systemd[1]: Mounting sys-kernel-debug.mount... Oct 31 01:29:39.518885 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 31 01:29:39.518892 systemd[1]: Mounting tmp.mount... Oct 31 01:29:39.518901 systemd[1]: Starting flatcar-tmpfiles.service... Oct 31 01:29:39.518908 systemd[1]: Starting ignition-delete-config.service... Oct 31 01:29:39.518915 systemd[1]: Starting kmod-static-nodes.service... Oct 31 01:29:39.518922 systemd[1]: Starting modprobe@configfs.service... Oct 31 01:29:39.518929 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:29:39.518937 systemd[1]: Starting modprobe@drm.service... Oct 31 01:29:39.518944 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:29:39.518951 systemd[1]: Starting modprobe@fuse.service... Oct 31 01:29:39.518958 systemd[1]: Starting modprobe@loop.service... Oct 31 01:29:39.518976 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Oct 31 01:29:39.518984 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 31 01:29:39.518991 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Oct 31 01:29:39.518998 systemd[1]: Starting systemd-journald.service... Oct 31 01:29:39.519006 systemd[1]: Starting systemd-modules-load.service... Oct 31 01:29:39.519013 systemd[1]: Starting systemd-network-generator.service... Oct 31 01:29:39.519020 systemd[1]: Starting systemd-remount-fs.service... Oct 31 01:29:39.519027 systemd[1]: Starting systemd-udev-trigger.service... Oct 31 01:29:39.519034 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:29:39.519043 systemd[1]: Mounted dev-hugepages.mount. Oct 31 01:29:39.519050 systemd[1]: Mounted dev-mqueue.mount. Oct 31 01:29:39.519057 systemd[1]: Mounted media.mount. Oct 31 01:29:39.519064 systemd[1]: Mounted sys-kernel-debug.mount. Oct 31 01:29:39.519071 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 31 01:29:39.519078 systemd[1]: Mounted tmp.mount. Oct 31 01:29:39.519086 systemd[1]: Finished kmod-static-nodes.service. Oct 31 01:29:39.519093 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 01:29:39.519100 systemd[1]: Finished modprobe@configfs.service. Oct 31 01:29:39.519108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:29:39.519115 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:29:39.519123 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 01:29:39.519135 systemd[1]: Finished modprobe@drm.service. Oct 31 01:29:39.519143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:29:39.519150 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:29:39.519158 systemd[1]: Finished systemd-modules-load.service. 
Oct 31 01:29:39.519165 systemd[1]: Finished systemd-network-generator.service. Oct 31 01:29:39.519172 systemd[1]: Finished systemd-remount-fs.service. Oct 31 01:29:39.519180 systemd[1]: Reached target network-pre.target. Oct 31 01:29:39.519188 systemd[1]: Mounting sys-kernel-config.mount... Oct 31 01:29:39.519195 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 01:29:39.519206 systemd-journald[1035]: Journal started Oct 31 01:29:39.519241 systemd-journald[1035]: Runtime Journal (/run/log/journal/a836b12f4abd44b3a88d3e0c235a9367) is 4.8M, max 38.8M, 34.0M free. Oct 31 01:29:39.424000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 31 01:29:39.424000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Oct 31 01:29:39.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 01:29:39.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.508000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 31 01:29:39.508000 audit[1035]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff8a00ed80 a2=4000 a3=7fff8a00ee1c items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:29:39.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:29:39.508000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 31 01:29:39.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.521030 jq[1019]: true Oct 31 01:29:39.525008 jq[1050]: true Oct 31 01:29:39.530406 kernel: loop: module loaded Oct 31 01:29:39.535367 systemd[1]: Starting systemd-hwdb-update.service... Oct 31 01:29:39.535406 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:29:39.537427 systemd[1]: Starting systemd-random-seed.service... Oct 31 01:29:39.539422 systemd[1]: Starting systemd-sysctl.service... Oct 31 01:29:39.543431 systemd[1]: Started systemd-journald.service. Oct 31 01:29:39.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.544154 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:29:39.544248 systemd[1]: Finished modprobe@loop.service. Oct 31 01:29:39.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 01:29:39.544784 systemd[1]: Mounted sys-kernel-config.mount. Oct 31 01:29:39.545745 systemd[1]: Starting systemd-journal-flush.service... Oct 31 01:29:39.545863 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:29:39.547280 systemd[1]: Finished flatcar-tmpfiles.service. Oct 31 01:29:39.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.548255 systemd[1]: Starting systemd-sysusers.service... Oct 31 01:29:39.557068 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 01:29:39.557179 systemd[1]: Finished modprobe@fuse.service. Oct 31 01:29:39.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.557497 kernel: fuse: init (API version 7.34) Oct 31 01:29:39.558186 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 31 01:29:39.560330 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 31 01:29:39.568042 systemd-journald[1035]: Time spent on flushing to /var/log/journal/a836b12f4abd44b3a88d3e0c235a9367 is 61.497ms for 1933 entries. Oct 31 01:29:39.568042 systemd-journald[1035]: System Journal (/var/log/journal/a836b12f4abd44b3a88d3e0c235a9367) is 8.0M, max 584.8M, 576.8M free. Oct 31 01:29:39.701276 systemd-journald[1035]: Received client request to flush runtime journal. 
Oct 31 01:29:39.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.569469 systemd[1]: Finished systemd-random-seed.service. Oct 31 01:29:39.569615 systemd[1]: Reached target first-boot-complete.target. Oct 31 01:29:39.589505 systemd[1]: Finished systemd-sysctl.service. Oct 31 01:29:39.701714 udevadm[1108]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 31 01:29:39.603921 systemd[1]: Finished systemd-sysusers.service. Oct 31 01:29:39.605041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 31 01:29:39.670292 systemd[1]: Finished systemd-udev-trigger.service. Oct 31 01:29:39.671352 systemd[1]: Starting systemd-udev-settle.service... Oct 31 01:29:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.702447 systemd[1]: Finished systemd-journal-flush.service. 
Oct 31 01:29:39.750454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 31 01:29:39.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:39.780425 ignition[1086]: Ignition 2.14.0 Oct 31 01:29:39.780615 ignition[1086]: deleting config from guestinfo properties Oct 31 01:29:39.783490 ignition[1086]: Successfully deleted config Oct 31 01:29:39.784113 systemd[1]: Finished ignition-delete-config.service. Oct 31 01:29:39.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.113877 systemd[1]: Finished systemd-hwdb-update.service. Oct 31 01:29:40.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.115165 systemd[1]: Starting systemd-udevd.service... Oct 31 01:29:40.132017 systemd-udevd[1113]: Using default interface naming scheme 'v252'. Oct 31 01:29:40.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.195419 systemd[1]: Started systemd-udevd.service. Oct 31 01:29:40.199306 systemd[1]: Starting systemd-networkd.service... Oct 31 01:29:40.207995 systemd[1]: Starting systemd-userdbd.service... Oct 31 01:29:40.240277 systemd[1]: Found device dev-ttyS0.device. 
Oct 31 01:29:40.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.242171 systemd[1]: Started systemd-userdbd.service. Oct 31 01:29:40.281426 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 31 01:29:40.294425 kernel: ACPI: button: Power Button [PWRF] Oct 31 01:29:40.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.316779 systemd-networkd[1125]: lo: Link UP Oct 31 01:29:40.316784 systemd-networkd[1125]: lo: Gained carrier Oct 31 01:29:40.317203 systemd-networkd[1125]: Enumeration completed Oct 31 01:29:40.317299 systemd[1]: Started systemd-networkd.service. Oct 31 01:29:40.317934 systemd-networkd[1125]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 31 01:29:40.322890 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 01:29:40.323080 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 01:29:40.324434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Oct 31 01:29:40.327135 systemd-networkd[1125]: ens192: Link UP Oct 31 01:29:40.327371 systemd-networkd[1125]: ens192: Gained carrier Oct 31 01:29:40.349602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 31 01:29:40.378414 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Oct 31 01:29:40.388563 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 31 01:29:40.388653 kernel: Guest personality initialized and is active Oct 31 01:29:40.369000 audit[1127]: AVC avc: denied { confidentiality } for pid=1127 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 31 01:29:40.369000 audit[1127]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557cfd3c5040 a1=338ec a2=7f8aebbb0bc5 a3=5 items=110 ppid=1113 pid=1127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:29:40.369000 audit: CWD cwd="/" Oct 31 01:29:40.369000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=1 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=2 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=3 name=(null) inode=17842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=4 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH 
item=5 name=(null) inode=17843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=6 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=7 name=(null) inode=17844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=8 name=(null) inode=17844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=9 name=(null) inode=17845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=10 name=(null) inode=17844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=11 name=(null) inode=17846 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=12 name=(null) inode=17844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=13 name=(null) inode=17847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=14 name=(null) inode=17844 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=15 name=(null) inode=17848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=16 name=(null) inode=17844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=17 name=(null) inode=17849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=18 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=19 name=(null) inode=17850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=20 name=(null) inode=17850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=21 name=(null) inode=17851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=22 name=(null) inode=17850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=23 name=(null) inode=17852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=24 name=(null) inode=17850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=25 name=(null) inode=17853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=26 name=(null) inode=17850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=27 name=(null) inode=17854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=28 name=(null) inode=17850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=29 name=(null) inode=17855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=30 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=31 name=(null) inode=17856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=32 name=(null) inode=17856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=33 name=(null) inode=17857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=34 name=(null) inode=17856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=35 name=(null) inode=17858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=36 name=(null) inode=17856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=37 name=(null) inode=17859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=38 name=(null) inode=17856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=39 name=(null) inode=17860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=40 name=(null) inode=17856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=41 name=(null) inode=17861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=42 name=(null) inode=17841 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=43 name=(null) inode=17862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=44 name=(null) inode=17862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=45 name=(null) inode=17863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=46 name=(null) inode=17862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=47 name=(null) inode=17864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=48 name=(null) inode=17862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=49 name=(null) inode=17865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=50 name=(null) inode=17862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 
01:29:40.369000 audit: PATH item=51 name=(null) inode=17866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=52 name=(null) inode=17862 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=53 name=(null) inode=17867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=55 name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=56 name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=57 name=(null) inode=17869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=58 name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=59 name=(null) inode=17870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=60 
name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=61 name=(null) inode=17871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=62 name=(null) inode=17871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=63 name=(null) inode=17872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=64 name=(null) inode=17871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=65 name=(null) inode=17873 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=66 name=(null) inode=17871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=67 name=(null) inode=17874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=68 name=(null) inode=17871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=69 name=(null) inode=17875 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=70 name=(null) inode=17871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=71 name=(null) inode=17876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=72 name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=73 name=(null) inode=17877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=74 name=(null) inode=17877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=75 name=(null) inode=17878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=76 name=(null) inode=17877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=77 name=(null) inode=17879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=78 name=(null) inode=17877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.390770 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 31 01:29:40.390792 kernel: Initialized host personality Oct 31 01:29:40.369000 audit: PATH item=79 name=(null) inode=17880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=80 name=(null) inode=17877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=81 name=(null) inode=17881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=82 name=(null) inode=17877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=83 name=(null) inode=17882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=84 name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=85 name=(null) inode=17883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=86 name=(null) inode=17883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 
31 01:29:40.369000 audit: PATH item=87 name=(null) inode=17884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=88 name=(null) inode=17883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=89 name=(null) inode=17885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=90 name=(null) inode=17883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=91 name=(null) inode=17886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=92 name=(null) inode=17883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=93 name=(null) inode=17887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=94 name=(null) inode=17883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=95 name=(null) inode=17888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=96 
name=(null) inode=17868 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=97 name=(null) inode=17889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=98 name=(null) inode=17889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=99 name=(null) inode=17890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=100 name=(null) inode=17889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=101 name=(null) inode=17891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=102 name=(null) inode=17889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=103 name=(null) inode=17892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=104 name=(null) inode=17889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=105 name=(null) inode=17893 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=106 name=(null) inode=17889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=107 name=(null) inode=17894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PATH item=109 name=(null) inode=17895 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:29:40.369000 audit: PROCTITLE proctitle="(udev-worker)" Oct 31 01:29:40.402093 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 31 01:29:40.407413 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Oct 31 01:29:40.418409 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 01:29:40.418991 (udev-worker)[1114]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 31 01:29:40.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.437639 systemd[1]: Finished systemd-udev-settle.service. Oct 31 01:29:40.438774 systemd[1]: Starting lvm2-activation-early.service... Oct 31 01:29:40.582021 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Oct 31 01:29:40.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.622154 systemd[1]: Finished lvm2-activation-early.service. Oct 31 01:29:40.622387 systemd[1]: Reached target cryptsetup.target. Oct 31 01:29:40.623628 systemd[1]: Starting lvm2-activation.service... Oct 31 01:29:40.627945 lvm[1149]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:29:40.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.661218 systemd[1]: Finished lvm2-activation.service. Oct 31 01:29:40.661459 systemd[1]: Reached target local-fs-pre.target. Oct 31 01:29:40.661594 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 01:29:40.661613 systemd[1]: Reached target local-fs.target. Oct 31 01:29:40.661737 systemd[1]: Reached target machines.target. Oct 31 01:29:40.663038 systemd[1]: Starting ldconfig.service... Oct 31 01:29:40.668605 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:29:40.668664 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:29:40.670080 systemd[1]: Starting systemd-boot-update.service... Oct 31 01:29:40.671078 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 31 01:29:40.672313 systemd[1]: Starting systemd-machine-id-commit.service... Oct 31 01:29:40.673714 systemd[1]: Starting systemd-sysext.service... 
Oct 31 01:29:40.697495 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1152 (bootctl) Oct 31 01:29:40.698452 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 31 01:29:40.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:40.716626 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 31 01:29:40.721378 systemd[1]: Unmounting usr-share-oem.mount... Oct 31 01:29:40.723768 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 31 01:29:40.723901 systemd[1]: Unmounted usr-share-oem.mount. Oct 31 01:29:40.749416 kernel: loop0: detected capacity change from 0 to 224512 Oct 31 01:29:41.413639 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 01:29:41.414090 systemd[1]: Finished systemd-machine-id-commit.service. Oct 31 01:29:41.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.414972 kernel: kauditd_printk_skb: 199 callbacks suppressed Oct 31 01:29:41.415013 kernel: audit: type=1130 audit(1761874181.412:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.430433 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 01:29:41.440568 systemd-fsck[1164]: fsck.fat 4.2 (2021-01-31) Oct 31 01:29:41.440568 systemd-fsck[1164]: /dev/sda1: 790 files, 120772/258078 clusters Oct 31 01:29:41.441815 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Oct 31 01:29:41.442990 systemd[1]: Mounting boot.mount... Oct 31 01:29:41.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.447473 kernel: audit: type=1130 audit(1761874181.440:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.451416 kernel: loop1: detected capacity change from 0 to 224512 Oct 31 01:29:41.535668 systemd[1]: Mounted boot.mount. Oct 31 01:29:41.550039 systemd[1]: Finished systemd-boot-update.service. Oct 31 01:29:41.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.553412 kernel: audit: type=1130 audit(1761874181.548:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.554224 (sd-sysext)[1171]: Using extensions 'kubernetes'. Oct 31 01:29:41.554516 (sd-sysext)[1171]: Merged extensions into '/usr'. Oct 31 01:29:41.568378 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:29:41.569904 systemd[1]: Mounting usr-share-oem.mount... Oct 31 01:29:41.571068 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:29:41.572251 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:29:41.573330 systemd[1]: Starting modprobe@loop.service... 
Oct 31 01:29:41.573548 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:29:41.573746 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:29:41.573953 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:29:41.575992 systemd[1]: Mounted usr-share-oem.mount. Oct 31 01:29:41.576508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:29:41.576989 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:29:41.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.577383 systemd[1]: Finished systemd-sysext.service. Oct 31 01:29:41.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.583289 kernel: audit: type=1130 audit(1761874181.575:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.583368 kernel: audit: type=1131 audit(1761874181.575:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:29:41.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.586316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:29:41.586500 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:29:41.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.588817 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:29:41.588969 systemd[1]: Finished modprobe@loop.service. Oct 31 01:29:41.591184 kernel: audit: type=1130 audit(1761874181.584:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.591220 kernel: audit: type=1130 audit(1761874181.587:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.591236 kernel: audit: type=1131 audit(1761874181.587:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:29:41.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.595708 systemd[1]: Starting ensure-sysext.service... Oct 31 01:29:41.599414 kernel: audit: type=1130 audit(1761874181.592:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.599458 kernel: audit: type=1131 audit(1761874181.592:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.600080 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:29:41.600122 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:29:41.601119 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 31 01:29:41.602954 systemd[1]: Reloading. Oct 31 01:29:41.613927 systemd-tmpfiles[1187]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 31 01:29:41.620113 systemd-tmpfiles[1187]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 01:29:41.625626 systemd-tmpfiles[1187]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 31 01:29:41.638484 /usr/lib/systemd/system-generators/torcx-generator[1206]: time="2025-10-31T01:29:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:29:41.638501 /usr/lib/systemd/system-generators/torcx-generator[1206]: time="2025-10-31T01:29:41Z" level=info msg="torcx already run" Oct 31 01:29:41.715044 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:29:41.715059 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:29:41.730963 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:29:41.774029 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:29:41.774944 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:29:41.775999 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:29:41.777008 systemd[1]: Starting modprobe@loop.service... Oct 31 01:29:41.777166 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:29:41.777241 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:29:41.777347 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 31 01:29:41.778243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:29:41.778436 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:29:41.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.778991 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:29:41.779084 systemd[1]: Finished modprobe@loop.service. Oct 31 01:29:41.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:29:41.779391 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:29:41.780269 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:29:41.781297 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:29:41.782451 systemd[1]: Starting modprobe@loop.service... Oct 31 01:29:41.782633 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:29:41.782709 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Oct 31 01:29:41.782815 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:29:41.784591 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:29:41.788848 systemd[1]: Starting modprobe@drm.service...
Oct 31 01:29:41.789133 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 01:29:41.789216 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 01:29:41.791565 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 31 01:29:41.791775 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:29:41.794711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 01:29:41.794817 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 01:29:41.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.796923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 01:29:41.797048 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 01:29:41.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.799817 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 01:29:41.799914 systemd[1]: Finished modprobe@loop.service.
Oct 31 01:29:41.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.800482 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 01:29:41.800572 systemd[1]: Finished modprobe@drm.service.
Oct 31 01:29:41.801176 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 01:29:41.801259 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 01:29:41.802152 systemd[1]: Finished ensure-sysext.service.
Oct 31 01:29:41.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.868057 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct 31 01:29:41.869320 systemd[1]: Starting audit-rules.service...
Oct 31 01:29:41.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.870432 systemd[1]: Starting clean-ca-certificates.service...
Oct 31 01:29:41.871552 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct 31 01:29:41.873303 systemd[1]: Starting systemd-resolved.service...
Oct 31 01:29:41.875494 systemd[1]: Starting systemd-timesyncd.service...
Oct 31 01:29:41.878752 systemd[1]: Starting systemd-update-utmp.service...
Oct 31 01:29:41.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.883708 systemd[1]: Finished clean-ca-certificates.service.
Oct 31 01:29:41.884119 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 01:29:41.888000 audit[1297]: SYSTEM_BOOT pid=1297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.891013 systemd[1]: Finished systemd-update-utmp.service.
Oct 31 01:29:41.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:29:41.912638 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 31 01:29:41.941000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 31 01:29:41.941000 audit[1314]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc0f02c480 a2=420 a3=0 items=0 ppid=1291 pid=1314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:29:41.941000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 31 01:29:41.942603 augenrules[1314]: No rules
Oct 31 01:29:41.943077 systemd[1]: Finished audit-rules.service.
Oct 31 01:29:41.959590 systemd-networkd[1125]: ens192: Gained IPv6LL
Oct 31 01:29:41.961818 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 31 01:29:41.969838 systemd[1]: Started systemd-timesyncd.service.
Oct 31 01:29:41.970019 systemd[1]: Reached target time-set.target.
Oct 31 01:29:41.974455 systemd-resolved[1294]: Positive Trust Anchors:
Oct 31 01:29:41.974466 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 01:29:41.974485 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 31 01:29:42.018376 systemd-resolved[1294]: Defaulting to hostname 'linux'.
Oct 31 01:29:42.019631 systemd[1]: Started systemd-resolved.service.
Oct 31 01:29:42.019785 systemd[1]: Reached target network.target.
Oct 31 01:29:42.019875 systemd[1]: Reached target network-online.target.
Oct 31 01:29:42.019965 systemd[1]: Reached target nss-lookup.target.
Oct 31 01:29:42.106221 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 01:29:42.109123 systemd[1]: Finished ldconfig.service.
Oct 31 01:29:42.110429 systemd[1]: Starting systemd-update-done.service...
Oct 31 01:29:42.114484 systemd[1]: Finished systemd-update-done.service.
Oct 31 01:29:42.114661 systemd[1]: Reached target sysinit.target.
Oct 31 01:29:42.114819 systemd[1]: Started motdgen.path.
Oct 31 01:29:42.114938 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 31 01:29:42.115146 systemd[1]: Started logrotate.timer.
Oct 31 01:29:42.115279 systemd[1]: Started mdadm.timer.
Oct 31 01:29:42.115365 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 31 01:29:42.115467 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 01:29:42.115484 systemd[1]: Reached target paths.target.
Oct 31 01:29:42.115567 systemd[1]: Reached target timers.target.
Oct 31 01:29:42.115818 systemd[1]: Listening on dbus.socket.
Oct 31 01:29:42.116794 systemd[1]: Starting docker.socket...
Oct 31 01:29:42.124599 systemd[1]: Listening on sshd.socket.
Oct 31 01:29:42.124781 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 01:29:42.125088 systemd[1]: Listening on docker.socket.
Oct 31 01:29:42.125204 systemd[1]: Reached target sockets.target.
Oct 31 01:29:42.125295 systemd[1]: Reached target basic.target.
Oct 31 01:29:42.125485 systemd[1]: System is tainted: cgroupsv1
Oct 31 01:29:42.125516 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 31 01:29:42.125532 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 31 01:29:42.126545 systemd[1]: Starting containerd.service...
Oct 31 01:29:42.127458 systemd[1]: Starting dbus.service...
Oct 31 01:29:42.128455 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 31 01:29:42.129373 systemd[1]: Starting extend-filesystems.service...
Oct 31 01:29:42.129537 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 31 01:29:42.132691 jq[1330]: false
Oct 31 01:29:42.135482 systemd[1]: Starting kubelet.service...
Oct 31 01:29:42.136368 systemd[1]: Starting motdgen.service...
Oct 31 01:29:42.145971 systemd[1]: Starting prepare-helm.service...
Oct 31 01:29:42.147069 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 31 01:29:42.148118 systemd[1]: Starting sshd-keygen.service...
Oct 31 01:29:42.149733 systemd[1]: Starting systemd-logind.service...
Oct 31 01:31:20.308271 systemd-timesyncd[1295]: Contacted time server 172.235.60.8:123 (0.flatcar.pool.ntp.org).
Oct 31 01:31:20.308287 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 01:31:20.308329 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 01:31:20.308543 systemd-timesyncd[1295]: Initial clock synchronization to Fri 2025-10-31 01:31:20.308200 UTC.
Oct 31 01:31:20.308911 systemd-resolved[1294]: Clock change detected. Flushing caches.
Oct 31 01:31:20.309662 systemd[1]: Starting update-engine.service...
Oct 31 01:31:20.311244 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 31 01:31:20.312320 systemd[1]: Starting vmtoolsd.service...
Oct 31 01:31:20.313627 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 01:31:20.314226 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 31 01:31:20.317040 jq[1344]: true
Oct 31 01:31:20.321147 jq[1352]: true
Oct 31 01:31:20.327299 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 01:31:20.327444 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 31 01:31:20.345261 systemd[1]: Started vmtoolsd.service.
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found loop1
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda1
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda2
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda3
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found usr
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda4
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda6
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda7
Oct 31 01:31:20.352251 extend-filesystems[1331]: Found sda9
Oct 31 01:31:20.352251 extend-filesystems[1331]: Checking size of /dev/sda9
Oct 31 01:31:20.355518 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 01:31:20.355679 systemd[1]: Finished motdgen.service.
Oct 31 01:31:20.392416 env[1377]: time="2025-10-31T01:31:20.392367922Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 31 01:31:20.402338 env[1377]: time="2025-10-31T01:31:20.402320777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 01:31:20.402467 env[1377]: time="2025-10-31T01:31:20.402456426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403306 env[1377]: time="2025-10-31T01:31:20.403279509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403306 env[1377]: time="2025-10-31T01:31:20.403297936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403447 env[1377]: time="2025-10-31T01:31:20.403433425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403447 env[1377]: time="2025-10-31T01:31:20.403446037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403495 env[1377]: time="2025-10-31T01:31:20.403453971Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 31 01:31:20.403495 env[1377]: time="2025-10-31T01:31:20.403459509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403535 env[1377]: time="2025-10-31T01:31:20.403502848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403654 env[1377]: time="2025-10-31T01:31:20.403641011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403751 env[1377]: time="2025-10-31T01:31:20.403735312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 01:31:20.403751 env[1377]: time="2025-10-31T01:31:20.403749518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 01:31:20.403799 env[1377]: time="2025-10-31T01:31:20.403778369Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 31 01:31:20.403799 env[1377]: time="2025-10-31T01:31:20.403789281Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 01:31:20.423617 tar[1348]: linux-amd64/LICENSE
Oct 31 01:31:20.423617 tar[1348]: linux-amd64/helm
Oct 31 01:31:20.425100 systemd-logind[1342]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 31 01:31:20.425634 systemd-logind[1342]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 01:31:20.429790 extend-filesystems[1331]: Old size kept for /dev/sda9
Oct 31 01:31:20.429790 extend-filesystems[1331]: Found sr0
Oct 31 01:31:20.429953 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 01:31:20.430089 systemd[1]: Finished extend-filesystems.service.
Oct 31 01:31:20.432905 systemd-logind[1342]: New seat seat0.
Oct 31 01:31:20.466328 env[1377]: time="2025-10-31T01:31:20.466300570Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466331644Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466342545Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466376694Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466392419Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466402114Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466409871Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466417467Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466423 env[1377]: time="2025-10-31T01:31:20.466424910Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466549 env[1377]: time="2025-10-31T01:31:20.466432437Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466549 env[1377]: time="2025-10-31T01:31:20.466459121Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.466549 env[1377]: time="2025-10-31T01:31:20.466467983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 01:31:20.466549 env[1377]: time="2025-10-31T01:31:20.466534430Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 01:31:20.466627 env[1377]: time="2025-10-31T01:31:20.466591106Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468065243Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468086714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468095648Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468121660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468129434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468136551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468142942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468150232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468157048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468163779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468170515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468360 env[1377]: time="2025-10-31T01:31:20.468178840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468660167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468673715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468682028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468689053Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468697816Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468705371Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468716023Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 31 01:31:20.468814 env[1377]: time="2025-10-31T01:31:20.468737669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 01:31:20.469296 env[1377]: time="2025-10-31T01:31:20.469020631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 31 01:31:20.469296 env[1377]: time="2025-10-31T01:31:20.469056665Z" level=info msg="Connect containerd service"
Oct 31 01:31:20.469296 env[1377]: time="2025-10-31T01:31:20.469078312Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 31 01:31:20.481132 env[1377]: time="2025-10-31T01:31:20.469446950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 01:31:20.481132 env[1377]: time="2025-10-31T01:31:20.469577522Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 01:31:20.481132 env[1377]: time="2025-10-31T01:31:20.469601156Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 01:31:20.481132 env[1377]: time="2025-10-31T01:31:20.475917684Z" level=info msg="containerd successfully booted in 0.083960s"
Oct 31 01:31:20.474093 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 31 01:31:20.481246 bash[1373]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 01:31:20.475781 systemd[1]: Started containerd.service.
Oct 31 01:31:20.483538 env[1377]: time="2025-10-31T01:31:20.482673590Z" level=info msg="Start subscribing containerd event"
Oct 31 01:31:20.484727 env[1377]: time="2025-10-31T01:31:20.484712655Z" level=info msg="Start recovering state"
Oct 31 01:31:20.484866 env[1377]: time="2025-10-31T01:31:20.484855795Z" level=info msg="Start event monitor"
Oct 31 01:31:20.484931 env[1377]: time="2025-10-31T01:31:20.484921170Z" level=info msg="Start snapshots syncer"
Oct 31 01:31:20.484992 env[1377]: time="2025-10-31T01:31:20.484972290Z" level=info msg="Start cni network conf syncer for default"
Oct 31 01:31:20.485035 env[1377]: time="2025-10-31T01:31:20.485025849Z" level=info msg="Start streaming server"
Oct 31 01:31:20.498195 dbus-daemon[1328]: [system] SELinux support is enabled
Oct 31 01:31:20.498327 systemd[1]: Started dbus.service.
Oct 31 01:31:20.499650 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 01:31:20.499668 systemd[1]: Reached target system-config.target.
Oct 31 01:31:20.499802 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 01:31:20.499816 systemd[1]: Reached target user-config.target.
Oct 31 01:31:20.503085 systemd[1]: Started systemd-logind.service.
Oct 31 01:31:20.506713 kernel: NET: Registered PF_VSOCK protocol family
Oct 31 01:31:20.529292 update_engine[1343]: I1031 01:31:20.523636 1343 main.cc:92] Flatcar Update Engine starting
Oct 31 01:31:20.532981 systemd[1]: Started update-engine.service.
Oct 31 01:31:20.534699 systemd[1]: Started locksmithd.service.
Oct 31 01:31:20.535890 update_engine[1343]: I1031 01:31:20.535871 1343 update_check_scheduler.cc:74] Next update check in 6m32s
Oct 31 01:31:20.885287 tar[1348]: linux-amd64/README.md
Oct 31 01:31:20.888387 systemd[1]: Finished prepare-helm.service.
Oct 31 01:31:20.978929 sshd_keygen[1359]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 01:31:20.983292 locksmithd[1405]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 01:31:20.991676 systemd[1]: Finished sshd-keygen.service.
Oct 31 01:31:20.992911 systemd[1]: Starting issuegen.service...
Oct 31 01:31:20.996644 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 01:31:20.996767 systemd[1]: Finished issuegen.service.
Oct 31 01:31:20.997900 systemd[1]: Starting systemd-user-sessions.service...
Oct 31 01:31:21.005883 systemd[1]: Finished systemd-user-sessions.service.
Oct 31 01:31:21.006908 systemd[1]: Started getty@tty1.service.
Oct 31 01:31:21.007831 systemd[1]: Started serial-getty@ttyS0.service.
Oct 31 01:31:21.008033 systemd[1]: Reached target getty.target.
Oct 31 01:31:22.154997 systemd[1]: Started kubelet.service.
Oct 31 01:31:22.155343 systemd[1]: Reached target multi-user.target.
Oct 31 01:31:22.156937 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 31 01:31:22.162981 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 31 01:31:22.163132 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 31 01:31:22.165556 systemd[1]: Startup finished in 7.146s (kernel) + 7.255s (userspace) = 14.401s.
Oct 31 01:31:22.191181 login[1473]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 31 01:31:22.192267 login[1474]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 31 01:31:22.198692 systemd[1]: Created slice user-500.slice.
Oct 31 01:31:22.199318 systemd[1]: Starting user-runtime-dir@500.service...
Oct 31 01:31:22.201264 systemd-logind[1342]: New session 1 of user core.
Oct 31 01:31:22.206024 systemd-logind[1342]: New session 2 of user core.
Oct 31 01:31:22.208515 systemd[1]: Finished user-runtime-dir@500.service.
Oct 31 01:31:22.209316 systemd[1]: Starting user@500.service...
Oct 31 01:31:22.211852 (systemd)[1485]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:31:22.262212 systemd[1485]: Queued start job for default target default.target.
Oct 31 01:31:22.262348 systemd[1485]: Reached target paths.target.
Oct 31 01:31:22.262360 systemd[1485]: Reached target sockets.target.
Oct 31 01:31:22.262368 systemd[1485]: Reached target timers.target.
Oct 31 01:31:22.262383 systemd[1485]: Reached target basic.target.
Oct 31 01:31:22.262408 systemd[1485]: Reached target default.target.
Oct 31 01:31:22.262423 systemd[1485]: Startup finished in 47ms.
Oct 31 01:31:22.262502 systemd[1]: Started user@500.service.
Oct 31 01:31:22.263151 systemd[1]: Started session-1.scope.
Oct 31 01:31:22.263536 systemd[1]: Started session-2.scope.
Oct 31 01:31:22.721449 kubelet[1480]: E1031 01:31:22.721424 1480 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 01:31:22.722679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 01:31:22.722770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 01:31:32.821192 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 31 01:31:32.821324 systemd[1]: Stopped kubelet.service.
Oct 31 01:31:32.822505 systemd[1]: Starting kubelet.service...
Oct 31 01:31:32.886321 systemd[1]: Started kubelet.service.
Oct 31 01:31:32.951561 kubelet[1521]: E1031 01:31:32.951527 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:31:32.953513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:31:32.953618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:31:43.071125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 01:31:43.071243 systemd[1]: Stopped kubelet.service. Oct 31 01:31:43.072346 systemd[1]: Starting kubelet.service... Oct 31 01:31:43.128704 systemd[1]: Started kubelet.service. Oct 31 01:31:43.203854 kubelet[1536]: E1031 01:31:43.203830 1536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:31:43.204940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:31:43.205024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:31:50.670596 systemd[1]: Created slice system-sshd.slice. Oct 31 01:31:50.671578 systemd[1]: Started sshd@0-139.178.70.102:22-147.75.109.163:46130.service. Oct 31 01:31:50.811133 sshd[1543]: Accepted publickey for core from 147.75.109.163 port 46130 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:50.811931 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:50.815022 systemd[1]: Started session-3.scope. Oct 31 01:31:50.815781 systemd-logind[1342]: New session 3 of user core. 
Oct 31 01:31:50.863147 systemd[1]: Started sshd@1-139.178.70.102:22-147.75.109.163:46132.service. Oct 31 01:31:50.890335 sshd[1548]: Accepted publickey for core from 147.75.109.163 port 46132 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:50.891150 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:50.893767 systemd-logind[1342]: New session 4 of user core. Oct 31 01:31:50.894042 systemd[1]: Started session-4.scope. Oct 31 01:31:50.945803 systemd[1]: Started sshd@2-139.178.70.102:22-147.75.109.163:46138.service. Oct 31 01:31:50.946790 sshd[1548]: pam_unix(sshd:session): session closed for user core Oct 31 01:31:50.950757 systemd[1]: sshd@1-139.178.70.102:22-147.75.109.163:46132.service: Deactivated successfully. Oct 31 01:31:50.951190 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 01:31:50.952128 systemd-logind[1342]: Session 4 logged out. Waiting for processes to exit. Oct 31 01:31:50.952751 systemd-logind[1342]: Removed session 4. Oct 31 01:31:50.973907 sshd[1553]: Accepted publickey for core from 147.75.109.163 port 46138 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:50.974942 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:50.977783 systemd[1]: Started session-5.scope. Oct 31 01:31:50.978042 systemd-logind[1342]: New session 5 of user core. Oct 31 01:31:51.026459 sshd[1553]: pam_unix(sshd:session): session closed for user core Oct 31 01:31:51.028139 systemd[1]: Started sshd@3-139.178.70.102:22-147.75.109.163:46152.service. Oct 31 01:31:51.031632 systemd[1]: sshd@2-139.178.70.102:22-147.75.109.163:46138.service: Deactivated successfully. Oct 31 01:31:51.031981 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 01:31:51.032723 systemd-logind[1342]: Session 5 logged out. Waiting for processes to exit. Oct 31 01:31:51.033305 systemd-logind[1342]: Removed session 5. 
Oct 31 01:31:51.055904 sshd[1560]: Accepted publickey for core from 147.75.109.163 port 46152 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:51.056759 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:51.060250 systemd[1]: Started session-6.scope. Oct 31 01:31:51.060479 systemd-logind[1342]: New session 6 of user core. Oct 31 01:31:51.111269 sshd[1560]: pam_unix(sshd:session): session closed for user core Oct 31 01:31:51.112697 systemd[1]: Started sshd@4-139.178.70.102:22-147.75.109.163:46164.service. Oct 31 01:31:51.114228 systemd[1]: sshd@3-139.178.70.102:22-147.75.109.163:46152.service: Deactivated successfully. Oct 31 01:31:51.115054 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 01:31:51.115370 systemd-logind[1342]: Session 6 logged out. Waiting for processes to exit. Oct 31 01:31:51.116103 systemd-logind[1342]: Removed session 6. Oct 31 01:31:51.140456 sshd[1567]: Accepted publickey for core from 147.75.109.163 port 46164 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:51.141255 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:51.144267 systemd[1]: Started session-7.scope. Oct 31 01:31:51.144451 systemd-logind[1342]: New session 7 of user core. Oct 31 01:31:51.237813 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 01:31:51.237988 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:31:51.244744 dbus-daemon[1328]: \xd0M\xa2\x90MV: received setenforce notice (enforcing=285659664) Oct 31 01:31:51.246091 sudo[1573]: pam_unix(sudo:session): session closed for user root Oct 31 01:31:51.248373 sshd[1567]: pam_unix(sshd:session): session closed for user core Oct 31 01:31:51.250333 systemd[1]: Started sshd@5-139.178.70.102:22-147.75.109.163:46168.service. 
Oct 31 01:31:51.251561 systemd[1]: sshd@4-139.178.70.102:22-147.75.109.163:46164.service: Deactivated successfully. Oct 31 01:31:51.252200 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 01:31:51.252426 systemd-logind[1342]: Session 7 logged out. Waiting for processes to exit. Oct 31 01:31:51.253221 systemd-logind[1342]: Removed session 7. Oct 31 01:31:51.279824 sshd[1575]: Accepted publickey for core from 147.75.109.163 port 46168 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:51.280816 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:51.283520 systemd[1]: Started session-8.scope. Oct 31 01:31:51.283714 systemd-logind[1342]: New session 8 of user core. Oct 31 01:31:51.332991 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 01:31:51.333154 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:31:51.334861 sudo[1582]: pam_unix(sudo:session): session closed for user root Oct 31 01:31:51.337738 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 01:31:51.338018 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:31:51.343388 systemd[1]: Stopping audit-rules.service... 
Oct 31 01:31:51.343000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 31 01:31:51.350408 kernel: kauditd_printk_skb: 21 callbacks suppressed Oct 31 01:31:51.350439 kernel: audit: type=1305 audit(1761874311.343:150): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 31 01:31:51.343000 audit[1585]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffcf8476b0 a2=420 a3=0 items=0 ppid=1 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.352141 auditctl[1585]: No rules Oct 31 01:31:51.352365 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 01:31:51.352479 systemd[1]: Stopped audit-rules.service. Oct 31 01:31:51.353473 systemd[1]: Starting audit-rules.service... Oct 31 01:31:51.355548 kernel: audit: type=1300 audit(1761874311.343:150): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffcf8476b0 a2=420 a3=0 items=0 ppid=1 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.343000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 31 01:31:51.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:31:51.362020 kernel: audit: type=1327 audit(1761874311.343:150): proctitle=2F7362696E2F617564697463746C002D44 Oct 31 01:31:51.362049 kernel: audit: type=1131 audit(1761874311.351:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.368470 augenrules[1603]: No rules Oct 31 01:31:51.368935 systemd[1]: Finished audit-rules.service. Oct 31 01:31:51.371716 kernel: audit: type=1130 audit(1761874311.368:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.369467 sudo[1581]: pam_unix(sudo:session): session closed for user root Oct 31 01:31:51.372632 sshd[1575]: pam_unix(sshd:session): session closed for user core Oct 31 01:31:51.368000 audit[1581]: USER_END pid=1581 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.373375 systemd[1]: Started sshd@6-139.178.70.102:22-147.75.109.163:46174.service. Oct 31 01:31:51.378728 kernel: audit: type=1106 audit(1761874311.368:153): pid=1581 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 01:31:51.378756 kernel: audit: type=1104 audit(1761874311.368:154): pid=1581 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.368000 audit[1581]: CRED_DISP pid=1581 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.382445 kernel: audit: type=1130 audit(1761874311.372:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-147.75.109.163:46174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-147.75.109.163:46174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.379644 systemd[1]: sshd@5-139.178.70.102:22-147.75.109.163:46168.service: Deactivated successfully. Oct 31 01:31:51.380055 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 01:31:51.381575 systemd-logind[1342]: Session 8 logged out. Waiting for processes to exit. Oct 31 01:31:51.382052 systemd-logind[1342]: Removed session 8. 
Oct 31 01:31:51.378000 audit[1575]: USER_END pid=1575 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.389435 kernel: audit: type=1106 audit(1761874311.378:156): pid=1575 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.389475 kernel: audit: type=1104 audit(1761874311.378:157): pid=1575 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.378000 audit[1575]: CRED_DISP pid=1575 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.102:22-147.75.109.163:46168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:31:51.409000 audit[1608]: USER_ACCT pid=1608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.410681 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 46174 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:31:51.410000 audit[1608]: CRED_ACQ pid=1608 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.410000 audit[1608]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd63162470 a2=3 a3=0 items=0 ppid=1 pid=1608 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.410000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:31:51.411622 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:31:51.414377 systemd[1]: Started session-9.scope. Oct 31 01:31:51.414501 systemd-logind[1342]: New session 9 of user core. 
Oct 31 01:31:51.416000 audit[1608]: USER_START pid=1608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.417000 audit[1613]: CRED_ACQ pid=1613 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:31:51.463742 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 01:31:51.463889 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 01:31:51.463000 audit[1614]: USER_ACCT pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.463000 audit[1614]: CRED_REFR pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.464000 audit[1614]: USER_START pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.481649 systemd[1]: Starting docker.service... 
Oct 31 01:31:51.505492 env[1625]: time="2025-10-31T01:31:51.505389472Z" level=info msg="Starting up" Oct 31 01:31:51.507328 env[1625]: time="2025-10-31T01:31:51.507302584Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 01:31:51.507328 env[1625]: time="2025-10-31T01:31:51.507323101Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 01:31:51.507390 env[1625]: time="2025-10-31T01:31:51.507338029Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 01:31:51.507390 env[1625]: time="2025-10-31T01:31:51.507344445Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 01:31:51.508617 env[1625]: time="2025-10-31T01:31:51.508597990Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 01:31:51.508678 env[1625]: time="2025-10-31T01:31:51.508665374Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 01:31:51.508741 env[1625]: time="2025-10-31T01:31:51.508729931Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 01:31:51.508796 env[1625]: time="2025-10-31T01:31:51.508784633Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 01:31:51.530959 env[1625]: time="2025-10-31T01:31:51.530938846Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 31 01:31:51.531075 env[1625]: time="2025-10-31T01:31:51.531065498Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 31 01:31:51.531229 env[1625]: time="2025-10-31T01:31:51.531220929Z" level=info msg="Loading containers: start." 
Oct 31 01:31:51.614000 audit[1655]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1655 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.614000 audit[1655]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff5fde4010 a2=0 a3=7fff5fde3ffc items=0 ppid=1625 pid=1655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.614000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 31 01:31:51.615000 audit[1657]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.615000 audit[1657]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedd505740 a2=0 a3=7ffedd50572c items=0 ppid=1625 pid=1657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.615000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 31 01:31:51.616000 audit[1659]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.616000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2331f500 a2=0 a3=7ffd2331f4ec items=0 ppid=1625 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.616000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 01:31:51.617000 audit[1661]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.617000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdeb9fe920 a2=0 a3=7ffdeb9fe90c items=0 ppid=1625 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.617000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 01:31:51.626000 audit[1663]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.626000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef3cc2ea0 a2=0 a3=7ffef3cc2e8c items=0 ppid=1625 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.626000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 31 01:31:51.649000 audit[1669]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.649000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb1304680 a2=0 a3=7ffdb130466c items=0 ppid=1625 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.649000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 31 01:31:51.655000 audit[1671]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.655000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1b60dc10 a2=0 a3=7ffd1b60dbfc items=0 ppid=1625 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.655000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 31 01:31:51.657000 audit[1673]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.657000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe92861df0 a2=0 a3=7ffe92861ddc items=0 ppid=1625 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.657000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 31 01:31:51.658000 audit[1675]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1675 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.658000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffeb7926740 a2=0 a3=7ffeb792672c items=0 ppid=1625 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.658000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:31:51.662000 audit[1679]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.662000 audit[1679]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd6399eb20 a2=0 a3=7ffd6399eb0c items=0 ppid=1625 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.662000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:31:51.666000 audit[1680]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.666000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff4a1d76a0 a2=0 a3=7fff4a1d768c items=0 ppid=1625 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.666000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:31:51.673621 kernel: Initializing XFRM netlink socket Oct 31 01:31:51.697677 env[1625]: time="2025-10-31T01:31:51.697656580Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 31 01:31:51.713000 audit[1688]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.713000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffde360ea10 a2=0 a3=7ffde360e9fc items=0 ppid=1625 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.713000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 31 01:31:51.729000 audit[1691]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.729000 audit[1691]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe82f4b730 a2=0 a3=7ffe82f4b71c items=0 ppid=1625 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 31 01:31:51.730000 audit[1694]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.730000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd161b3740 a2=0 a3=7ffd161b372c items=0 ppid=1625 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 31 01:31:51.732000 audit[1696]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.732000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe3df72200 a2=0 a3=7ffe3df721ec items=0 ppid=1625 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 31 01:31:51.733000 audit[1698]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.733000 audit[1698]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffff42d6410 a2=0 a3=7ffff42d63fc items=0 ppid=1625 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.733000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 31 01:31:51.735000 audit[1700]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.735000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc1af62320 a2=0 a3=7ffc1af6230c items=0 ppid=1625 
pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.735000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 31 01:31:51.736000 audit[1702]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.736000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd52a52310 a2=0 a3=7ffd52a522fc items=0 ppid=1625 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 31 01:31:51.755000 audit[1705]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.755000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffed16084a0 a2=0 a3=7ffed160848c items=0 ppid=1625 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.755000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 31 01:31:51.757000 audit[1707]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.757000 audit[1707]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffef1c90c40 a2=0 a3=7ffef1c90c2c items=0 ppid=1625 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.757000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 01:31:51.758000 audit[1709]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1709 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.758000 audit[1709]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff186cae30 a2=0 a3=7fff186cae1c items=0 ppid=1625 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.758000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 01:31:51.760000 audit[1711]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.760000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc88633b40 a2=0 a3=7ffc88633b2c items=0 ppid=1625 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.760000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 31 01:31:51.761043 systemd-networkd[1125]: docker0: Link UP Oct 31 01:31:51.770000 audit[1715]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1715 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.770000 audit[1715]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd702dd820 a2=0 a3=7ffd702dd80c items=0 ppid=1625 pid=1715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.770000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:31:51.775000 audit[1716]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:31:51.775000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce4411df0 a2=0 a3=7ffce4411ddc items=0 ppid=1625 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:31:51.775000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:31:51.776376 env[1625]: time="2025-10-31T01:31:51.776361593Z" level=info msg="Loading containers: done." Oct 31 01:31:51.783284 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2427098387-merged.mount: Deactivated successfully. 
Oct 31 01:31:51.789056 env[1625]: time="2025-10-31T01:31:51.789033014Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 01:31:51.789272 env[1625]: time="2025-10-31T01:31:51.789262224Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 31 01:31:51.789365 env[1625]: time="2025-10-31T01:31:51.789356709Z" level=info msg="Daemon has completed initialization" Oct 31 01:31:51.795580 systemd[1]: Started docker.service. Oct 31 01:31:51.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:51.799895 env[1625]: time="2025-10-31T01:31:51.799864812Z" level=info msg="API listen on /run/docker.sock" Oct 31 01:31:52.936366 env[1377]: time="2025-10-31T01:31:52.936336050Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 31 01:31:53.321189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 31 01:31:53.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:53.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:53.321348 systemd[1]: Stopped kubelet.service. Oct 31 01:31:53.322691 systemd[1]: Starting kubelet.service... Oct 31 01:31:53.386053 systemd[1]: Started kubelet.service. 
Oct 31 01:31:53.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:31:53.515349 kubelet[1755]: E1031 01:31:53.515302 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:31:53.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:31:53.516558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:31:53.516682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:31:53.954421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274040539.mount: Deactivated successfully. 
Oct 31 01:31:55.160128 env[1377]: time="2025-10-31T01:31:55.160098580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:55.160964 env[1377]: time="2025-10-31T01:31:55.160951069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:55.162010 env[1377]: time="2025-10-31T01:31:55.161992264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:55.163101 env[1377]: time="2025-10-31T01:31:55.163088314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:55.163632 env[1377]: time="2025-10-31T01:31:55.163599266Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 31 01:31:55.164009 env[1377]: time="2025-10-31T01:31:55.163995405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 31 01:31:56.827808 env[1377]: time="2025-10-31T01:31:56.827770481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:56.848231 env[1377]: time="2025-10-31T01:31:56.848213090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 31 01:31:56.877392 env[1377]: time="2025-10-31T01:31:56.877370224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:56.920095 env[1377]: time="2025-10-31T01:31:56.920075952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:56.920417 env[1377]: time="2025-10-31T01:31:56.920398129Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 31 01:31:56.921415 env[1377]: time="2025-10-31T01:31:56.921369379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 31 01:31:58.307854 env[1377]: time="2025-10-31T01:31:58.307817154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:58.322672 env[1377]: time="2025-10-31T01:31:58.322644340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:58.325191 env[1377]: time="2025-10-31T01:31:58.325174175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:58.332634 env[1377]: time="2025-10-31T01:31:58.332617271Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:31:58.333031 env[1377]: time="2025-10-31T01:31:58.333012602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 31 01:31:58.333804 env[1377]: time="2025-10-31T01:31:58.333787013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 31 01:31:59.546507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459654752.mount: Deactivated successfully. Oct 31 01:32:00.076690 env[1377]: time="2025-10-31T01:32:00.076647206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:00.098261 env[1377]: time="2025-10-31T01:32:00.098238583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:00.113504 env[1377]: time="2025-10-31T01:32:00.113483109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:00.119262 env[1377]: time="2025-10-31T01:32:00.119243673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:00.119783 env[1377]: time="2025-10-31T01:32:00.119760363Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference 
\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 31 01:32:00.120757 env[1377]: time="2025-10-31T01:32:00.120730426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 31 01:32:01.014616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208885370.mount: Deactivated successfully. Oct 31 01:32:01.933218 env[1377]: time="2025-10-31T01:32:01.933178052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:01.943278 env[1377]: time="2025-10-31T01:32:01.943256425Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:01.951413 env[1377]: time="2025-10-31T01:32:01.951394371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:01.956847 env[1377]: time="2025-10-31T01:32:01.956830877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:01.957393 env[1377]: time="2025-10-31T01:32:01.957371156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 31 01:32:01.957807 env[1377]: time="2025-10-31T01:32:01.957791695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 01:32:02.520158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214237640.mount: Deactivated successfully. 
Oct 31 01:32:02.522644 env[1377]: time="2025-10-31T01:32:02.522625022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:02.523650 env[1377]: time="2025-10-31T01:32:02.523638207Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:02.524844 env[1377]: time="2025-10-31T01:32:02.524829699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:02.525734 env[1377]: time="2025-10-31T01:32:02.525720639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:02.525921 env[1377]: time="2025-10-31T01:32:02.525906425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 01:32:02.526559 env[1377]: time="2025-10-31T01:32:02.526542578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 31 01:32:03.134269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710132997.mount: Deactivated successfully. Oct 31 01:32:03.571105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 31 01:32:03.571227 systemd[1]: Stopped kubelet.service. 
Oct 31 01:32:03.573296 kernel: kauditd_printk_skb: 88 callbacks suppressed Oct 31 01:32:03.573339 kernel: audit: type=1130 audit(1761874323.570:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:03.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:03.572272 systemd[1]: Starting kubelet.service... Oct 31 01:32:03.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:03.579621 kernel: audit: type=1131 audit(1761874323.570:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:03.994966 systemd[1]: Started kubelet.service. Oct 31 01:32:03.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:03.998616 kernel: audit: type=1130 audit(1761874323.994:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:32:04.036233 kubelet[1772]: E1031 01:32:04.035952 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:32:04.037288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:32:04.037374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:32:04.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:32:04.040621 kernel: audit: type=1131 audit(1761874324.036:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 01:32:05.402762 env[1377]: time="2025-10-31T01:32:05.402732150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:05.421073 env[1377]: time="2025-10-31T01:32:05.421054346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:05.431201 env[1377]: time="2025-10-31T01:32:05.431183508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:05.435798 env[1377]: time="2025-10-31T01:32:05.435782639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:05.436251 env[1377]: time="2025-10-31T01:32:05.436231809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 31 01:32:05.778719 update_engine[1343]: I1031 01:32:05.778649 1343 update_attempter.cc:509] Updating boot flags... Oct 31 01:32:06.960253 systemd[1]: Stopped kubelet.service. Oct 31 01:32:06.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:06.962699 systemd[1]: Starting kubelet.service... Oct 31 01:32:06.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:32:06.965770 kernel: audit: type=1130 audit(1761874326.960:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:06.965812 kernel: audit: type=1131 audit(1761874326.960:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:06.989523 systemd[1]: Reloading. Oct 31 01:32:07.049139 /usr/lib/systemd/system-generators/torcx-generator[1843]: time="2025-10-31T01:32:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:32:07.049356 /usr/lib/systemd/system-generators/torcx-generator[1843]: time="2025-10-31T01:32:07Z" level=info msg="torcx already run" Oct 31 01:32:07.121590 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:32:07.121711 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:32:07.134169 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:32:07.199339 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 01:32:07.199468 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 01:32:07.199755 systemd[1]: Stopped kubelet.service. 
Oct 31 01:32:07.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:32:07.202615 kernel: audit: type=1130 audit(1761874327.199:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:32:07.204460 systemd[1]: Starting kubelet.service... Oct 31 01:32:07.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:07.796730 systemd[1]: Started kubelet.service. Oct 31 01:32:07.801636 kernel: audit: type=1130 audit(1761874327.796:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:07.886893 kubelet[1918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:32:07.886893 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:32:07.886893 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 01:32:07.887210 kubelet[1918]: I1031 01:32:07.886942 1918 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:32:08.141998 kubelet[1918]: I1031 01:32:08.141975 1918 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 01:32:08.142159 kubelet[1918]: I1031 01:32:08.142151 1918 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:32:08.142485 kubelet[1918]: I1031 01:32:08.142476 1918 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 01:32:08.164723 kubelet[1918]: E1031 01:32:08.164702 1918 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:08.165120 kubelet[1918]: I1031 01:32:08.165109 1918 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:32:08.171061 kubelet[1918]: E1031 01:32:08.171046 1918 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:32:08.171146 kubelet[1918]: I1031 01:32:08.171136 1918 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 01:32:08.173296 kubelet[1918]: I1031 01:32:08.173286 1918 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 01:32:08.173594 kubelet[1918]: I1031 01:32:08.173580 1918 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:32:08.173751 kubelet[1918]: I1031 01:32:08.173645 1918 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 01:32:08.174657 kubelet[1918]: I1031 01:32:08.174648 1918 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 01:32:08.174709 kubelet[1918]: I1031 01:32:08.174702 1918 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 01:32:08.174829 kubelet[1918]: I1031 01:32:08.174821 1918 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:32:08.178417 kubelet[1918]: I1031 01:32:08.178407 1918 kubelet.go:446] "Attempting to sync node with API server" Oct 31 01:32:08.178482 kubelet[1918]: I1031 01:32:08.178473 1918 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:32:08.178545 kubelet[1918]: I1031 01:32:08.178537 1918 kubelet.go:352] "Adding apiserver pod source" Oct 31 01:32:08.178599 kubelet[1918]: I1031 01:32:08.178592 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:32:08.193240 kubelet[1918]: W1031 01:32:08.193190 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:08.193240 kubelet[1918]: E1031 01:32:08.193228 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:08.193344 kubelet[1918]: I1031 01:32:08.193309 1918 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 01:32:08.193574 kubelet[1918]: I1031 01:32:08.193560 1918 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 01:32:08.199224 kubelet[1918]: W1031 01:32:08.199169 1918 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 01:32:08.201051 kubelet[1918]: I1031 01:32:08.201036 1918 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 01:32:08.201108 kubelet[1918]: I1031 01:32:08.201061 1918 server.go:1287] "Started kubelet" Oct 31 01:32:08.215616 kubelet[1918]: I1031 01:32:08.215574 1918 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:32:08.215794 kubelet[1918]: W1031 01:32:08.215771 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:08.215863 kubelet[1918]: E1031 01:32:08.215847 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:08.216202 kubelet[1918]: I1031 01:32:08.216189 1918 server.go:479] "Adding debug handlers to kubelet server" Oct 31 01:32:08.216000 audit[1918]: AVC avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:08.216939 kubelet[1918]: I1031 01:32:08.216927 1918 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 01:32:08.217001 kubelet[1918]: I1031 01:32:08.216992 1918 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not 
set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 01:32:08.217083 kubelet[1918]: I1031 01:32:08.217075 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:32:08.216000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:08.220029 kubelet[1918]: I1031 01:32:08.220019 1918 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:32:08.220636 kernel: audit: type=1400 audit(1761874328.216:204): avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:08.220672 kernel: audit: type=1401 audit(1761874328.216:204): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:08.216000 audit[1918]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a1b530 a1=c000c2cb10 a2=c000a1b500 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:08.216000 audit[1918]: AVC avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:08.216000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" 
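The audit `PROCTITLE` records above carry the offending command line hex-encoded, because the raw value contains NUL bytes separating the arguments. A minimal sketch of decoding one — `decode_proctitle` is a hypothetical helper name, and the payload is copied verbatim from the kubelet record above (the kernel truncates long proctitles, so the final flag is cut off mid-word):

```python
# Decode an audit PROCTITLE payload: hex-encoded argv, NUL-separated.
def decode_proctitle(hex_payload: str) -> list[str]:
    raw = bytes.fromhex(hex_payload)
    # Arguments are separated by NUL bytes; drop any empty trailing field.
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Payload copied verbatim from the SELINUX_ERR record above; the record
# itself is truncated, which is why the last argument ends in "--confi".
payload = (
    "2F7573722F62696E2F6B7562656C657400"
    "2D2D626F6F7473747261702D6B756265636F6E6669673D"
    "2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E6600"
    "2D2D6B756265636F6E6669673D"
    "2F6574632F6B756265726E657465732F6B7562656C65742E636F6E6600"
    "2D2D636F6E6669"
)
print(decode_proctitle(payload))
# ['/usr/bin/kubelet', '--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf',
#  '--kubeconfig=/etc/kubernetes/kubelet.conf', '--confi']
```

The same helper decodes the later iptables records, e.g. the first `NETFILTER_CFG` proctitle below decodes to `iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle`.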
Oct 31 01:32:08.216000 audit[1918]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bfb280 a1=c000c2cb28 a2=c000a1b5c0 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:08.220926 kubelet[1918]: I1031 01:32:08.220898 1918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:32:08.221025 kubelet[1918]: I1031 01:32:08.221015 1918 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:32:08.222580 kubelet[1918]: I1031 01:32:08.222571 1918 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 01:32:08.222743 kubelet[1918]: E1031 01:32:08.222733 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:08.223087 kubelet[1918]: E1031 01:32:08.222144 1918 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736f5ddf4daaf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 01:32:08.201046772 +0000 UTC m=+0.397637314,LastTimestamp:2025-10-31 
01:32:08.201046772 +0000 UTC m=+0.397637314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 01:32:08.223296 kubelet[1918]: E1031 01:32:08.223271 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms" Oct 31 01:32:08.223617 kubelet[1918]: I1031 01:32:08.223558 1918 factory.go:221] Registration of the systemd container factory successfully Oct 31 01:32:08.223617 kubelet[1918]: I1031 01:32:08.223596 1918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:32:08.224491 kubelet[1918]: I1031 01:32:08.224480 1918 factory.go:221] Registration of the containerd container factory successfully Oct 31 01:32:08.224000 audit[1930]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.224000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd450f7fe0 a2=0 a3=7ffd450f7fcc items=0 ppid=1918 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 01:32:08.224000 audit[1931]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.224000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 
a0=3 a1=7ffe335c4510 a2=0 a3=7ffe335c44fc items=0 ppid=1918 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 01:32:08.226434 kubelet[1918]: I1031 01:32:08.226425 1918 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 01:32:08.226504 kubelet[1918]: I1031 01:32:08.226497 1918 reconciler.go:26] "Reconciler: start to sync state" Oct 31 01:32:08.226000 audit[1933]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.226000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb7bede40 a2=0 a3=7fffb7bede2c items=0 ppid=1918 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:32:08.227000 audit[1935]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.227000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffca3847920 a2=0 a3=7ffca384790c items=0 ppid=1918 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.227000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:32:08.234000 audit[1938]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.234000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd0f73fd80 a2=0 a3=7ffd0f73fd6c items=0 ppid=1918 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 31 01:32:08.235007 kubelet[1918]: I1031 01:32:08.234985 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 01:32:08.234000 audit[1939]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:08.234000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce7835660 a2=0 a3=7ffce783564c items=0 ppid=1918 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 01:32:08.235643 kubelet[1918]: I1031 01:32:08.235636 1918 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 31 01:32:08.235697 kubelet[1918]: I1031 01:32:08.235690 1918 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 01:32:08.235746 kubelet[1918]: I1031 01:32:08.235739 1918 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 01:32:08.235789 kubelet[1918]: I1031 01:32:08.235783 1918 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 01:32:08.235850 kubelet[1918]: E1031 01:32:08.235841 1918 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:32:08.235000 audit[1940]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.235000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee189a650 a2=0 a3=7ffee189a63c items=0 ppid=1918 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 01:32:08.236000 audit[1941]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.236000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce7b9d560 a2=0 a3=7ffce7b9d54c items=0 ppid=1918 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.236000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 01:32:08.236000 audit[1942]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:08.236000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd59404f80 a2=0 a3=7ffd59404f6c items=0 ppid=1918 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 01:32:08.237000 audit[1943]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:08.237000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe02ca1e10 a2=0 a3=7ffe02ca1dfc items=0 ppid=1918 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 01:32:08.238000 audit[1944]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:08.238000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff4fdf0fa0 a2=0 a3=7fff4fdf0f8c items=0 ppid=1918 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 01:32:08.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 01:32:08.238000 audit[1945]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:08.238000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd86662730 a2=0 a3=7ffd8666271c items=0 ppid=1918 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 01:32:08.241278 kubelet[1918]: W1031 01:32:08.241252 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:08.241355 kubelet[1918]: E1031 01:32:08.241345 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:08.241459 kubelet[1918]: E1031 01:32:08.241451 1918 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:32:08.241560 kubelet[1918]: W1031 01:32:08.241545 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:08.241633 kubelet[1918]: E1031 01:32:08.241624 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:08.244512 kubelet[1918]: I1031 01:32:08.244504 1918 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:32:08.244574 kubelet[1918]: I1031 01:32:08.244566 1918 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:32:08.244638 kubelet[1918]: I1031 01:32:08.244633 1918 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:32:08.245722 kubelet[1918]: I1031 01:32:08.245715 1918 policy_none.go:49] "None policy: Start" Oct 31 01:32:08.245775 kubelet[1918]: I1031 01:32:08.245769 1918 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 01:32:08.245817 kubelet[1918]: I1031 01:32:08.245811 1918 state_mem.go:35] "Initializing new in-memory state store" Oct 31 01:32:08.248476 kubelet[1918]: I1031 01:32:08.248466 1918 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 01:32:08.247000 audit[1918]: AVC avc: denied { mac_admin } for pid=1918 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:08.247000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:08.247000 audit[1918]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f43da0 a1=c000f40ab0 a2=c000f43d70 a3=25 items=0 ppid=1 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:08.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:08.248720 kubelet[1918]: I1031 01:32:08.248709 1918 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 01:32:08.248823 kubelet[1918]: I1031 01:32:08.248816 1918 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:32:08.248886 kubelet[1918]: I1031 01:32:08.248870 1918 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:32:08.249907 kubelet[1918]: I1031 01:32:08.249900 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:32:08.251047 kubelet[1918]: E1031 01:32:08.251033 1918 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:32:08.251080 kubelet[1918]: E1031 01:32:08.251060 1918 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 01:32:08.341131 kubelet[1918]: E1031 01:32:08.341094 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:08.341591 kubelet[1918]: E1031 01:32:08.341576 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:08.342990 kubelet[1918]: E1031 01:32:08.342973 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:08.350269 kubelet[1918]: I1031 01:32:08.350254 1918 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:32:08.350487 kubelet[1918]: E1031 01:32:08.350472 1918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Oct 31 01:32:08.424729 kubelet[1918]: E1031 01:32:08.424074 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms" Oct 31 01:32:08.527802 kubelet[1918]: I1031 01:32:08.527774 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06baa010e2bb14eb356784f2a3a4cf93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06baa010e2bb14eb356784f2a3a4cf93\") " 
pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:08.527929 kubelet[1918]: I1031 01:32:08.527919 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:08.527988 kubelet[1918]: I1031 01:32:08.527980 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:08.528051 kubelet[1918]: I1031 01:32:08.528041 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:08.528110 kubelet[1918]: I1031 01:32:08.528097 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:08.528165 kubelet[1918]: I1031 01:32:08.528157 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06baa010e2bb14eb356784f2a3a4cf93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06baa010e2bb14eb356784f2a3a4cf93\") " 
pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:08.528224 kubelet[1918]: I1031 01:32:08.528211 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06baa010e2bb14eb356784f2a3a4cf93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06baa010e2bb14eb356784f2a3a4cf93\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:08.528278 kubelet[1918]: I1031 01:32:08.528270 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:08.528334 kubelet[1918]: I1031 01:32:08.528321 1918 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:08.551282 kubelet[1918]: I1031 01:32:08.551260 1918 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:32:08.551476 kubelet[1918]: E1031 01:32:08.551460 1918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Oct 31 01:32:08.642051 env[1377]: time="2025-10-31T01:32:08.641758389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 01:32:08.643169 env[1377]: time="2025-10-31T01:32:08.642993389Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06baa010e2bb14eb356784f2a3a4cf93,Namespace:kube-system,Attempt:0,}" Oct 31 01:32:08.643450 env[1377]: time="2025-10-31T01:32:08.643430997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 01:32:08.825099 kubelet[1918]: E1031 01:32:08.825072 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms" Oct 31 01:32:08.952860 kubelet[1918]: I1031 01:32:08.952803 1918 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:32:08.953108 kubelet[1918]: E1031 01:32:08.952988 1918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Oct 31 01:32:09.143154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24873116.mount: Deactivated successfully. 
Oct 31 01:32:09.146333 env[1377]: time="2025-10-31T01:32:09.146299582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.146991 env[1377]: time="2025-10-31T01:32:09.146970797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.148620 env[1377]: time="2025-10-31T01:32:09.148590632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.149225 env[1377]: time="2025-10-31T01:32:09.149207671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.149828 env[1377]: time="2025-10-31T01:32:09.149808914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.152119 env[1377]: time="2025-10-31T01:32:09.152099349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.154298 env[1377]: time="2025-10-31T01:32:09.154278792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.154722 env[1377]: time="2025-10-31T01:32:09.154705037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.155233 env[1377]: time="2025-10-31T01:32:09.155166803Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.155630 env[1377]: time="2025-10-31T01:32:09.155616022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.156086 env[1377]: time="2025-10-31T01:32:09.156048012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.156546 env[1377]: time="2025-10-31T01:32:09.156505635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:09.175742 env[1377]: time="2025-10-31T01:32:09.167983128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:09.175742 env[1377]: time="2025-10-31T01:32:09.168001548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:09.175742 env[1377]: time="2025-10-31T01:32:09.168008474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:09.175742 env[1377]: time="2025-10-31T01:32:09.170222533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e6665c96f9086d546fdc52be1841e0c09e3414872364f5ed01db688f62ae0d1 pid=1961 runtime=io.containerd.runc.v2 Oct 31 01:32:09.176089 env[1377]: time="2025-10-31T01:32:09.167981513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:09.176089 env[1377]: time="2025-10-31T01:32:09.168001583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:09.176089 env[1377]: time="2025-10-31T01:32:09.168008518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:09.176089 env[1377]: time="2025-10-31T01:32:09.168187239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/198078b263aac57e5dffe12c9a818bf5b38550e88de7bde23a5498c240dd30ad pid=1962 runtime=io.containerd.runc.v2 Oct 31 01:32:09.203630 env[1377]: time="2025-10-31T01:32:09.201724358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:09.203630 env[1377]: time="2025-10-31T01:32:09.201745401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:09.203630 env[1377]: time="2025-10-31T01:32:09.201752162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:09.203630 env[1377]: time="2025-10-31T01:32:09.201820904Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92fbb4ceb539517bdf4a89f18b0146e4ce229ed2487266634f78a2d985bd1e66 pid=2012 runtime=io.containerd.runc.v2 Oct 31 01:32:09.234881 env[1377]: time="2025-10-31T01:32:09.234855678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"198078b263aac57e5dffe12c9a818bf5b38550e88de7bde23a5498c240dd30ad\"" Oct 31 01:32:09.243888 env[1377]: time="2025-10-31T01:32:09.243862511Z" level=info msg="CreateContainer within sandbox \"198078b263aac57e5dffe12c9a818bf5b38550e88de7bde23a5498c240dd30ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 01:32:09.244871 env[1377]: time="2025-10-31T01:32:09.244739456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06baa010e2bb14eb356784f2a3a4cf93,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e6665c96f9086d546fdc52be1841e0c09e3414872364f5ed01db688f62ae0d1\"" Oct 31 01:32:09.253143 env[1377]: time="2025-10-31T01:32:09.253124282Z" level=info msg="CreateContainer within sandbox \"7e6665c96f9086d546fdc52be1841e0c09e3414872364f5ed01db688f62ae0d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 01:32:09.258759 env[1377]: time="2025-10-31T01:32:09.258732404Z" level=info msg="CreateContainer within sandbox \"198078b263aac57e5dffe12c9a818bf5b38550e88de7bde23a5498c240dd30ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"689d2cb52924445ab46e1c34855dc2dab74e22a8e4f02099846b539060e6cb97\"" Oct 31 01:32:09.259183 env[1377]: time="2025-10-31T01:32:09.259171850Z" level=info msg="StartContainer for 
\"689d2cb52924445ab46e1c34855dc2dab74e22a8e4f02099846b539060e6cb97\"" Oct 31 01:32:09.260040 env[1377]: time="2025-10-31T01:32:09.260019726Z" level=info msg="CreateContainer within sandbox \"7e6665c96f9086d546fdc52be1841e0c09e3414872364f5ed01db688f62ae0d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec4dae37c701da83ff8463b851ee6112f035d1387e6c9c8369bb9309b3a04c1d\"" Oct 31 01:32:09.260213 env[1377]: time="2025-10-31T01:32:09.260199006Z" level=info msg="StartContainer for \"ec4dae37c701da83ff8463b851ee6112f035d1387e6c9c8369bb9309b3a04c1d\"" Oct 31 01:32:09.264957 env[1377]: time="2025-10-31T01:32:09.264939762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"92fbb4ceb539517bdf4a89f18b0146e4ce229ed2487266634f78a2d985bd1e66\"" Oct 31 01:32:09.266165 env[1377]: time="2025-10-31T01:32:09.266140796Z" level=info msg="CreateContainer within sandbox \"92fbb4ceb539517bdf4a89f18b0146e4ce229ed2487266634f78a2d985bd1e66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 01:32:09.270533 env[1377]: time="2025-10-31T01:32:09.270511524Z" level=info msg="CreateContainer within sandbox \"92fbb4ceb539517bdf4a89f18b0146e4ce229ed2487266634f78a2d985bd1e66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9af8ca4876b99a2e26bcf78ee66eefb864feed0c9b45195ad41b30b47307d4f9\"" Oct 31 01:32:09.270833 env[1377]: time="2025-10-31T01:32:09.270817812Z" level=info msg="StartContainer for \"9af8ca4876b99a2e26bcf78ee66eefb864feed0c9b45195ad41b30b47307d4f9\"" Oct 31 01:32:09.322683 env[1377]: time="2025-10-31T01:32:09.322658927Z" level=info msg="StartContainer for \"689d2cb52924445ab46e1c34855dc2dab74e22a8e4f02099846b539060e6cb97\" returns successfully" Oct 31 01:32:09.339254 env[1377]: time="2025-10-31T01:32:09.339230952Z" level=info msg="StartContainer for 
\"ec4dae37c701da83ff8463b851ee6112f035d1387e6c9c8369bb9309b3a04c1d\" returns successfully" Oct 31 01:32:09.347415 env[1377]: time="2025-10-31T01:32:09.347392591Z" level=info msg="StartContainer for \"9af8ca4876b99a2e26bcf78ee66eefb864feed0c9b45195ad41b30b47307d4f9\" returns successfully" Oct 31 01:32:09.451569 kubelet[1918]: W1031 01:32:09.451466 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:09.451569 kubelet[1918]: E1031 01:32:09.451507 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:09.625627 kubelet[1918]: E1031 01:32:09.625589 1918 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s" Oct 31 01:32:09.663249 kubelet[1918]: W1031 01:32:09.663188 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:09.663249 kubelet[1918]: E1031 01:32:09.663230 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: 
connect: connection refused" logger="UnhandledError" Oct 31 01:32:09.754305 kubelet[1918]: I1031 01:32:09.754058 1918 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:32:09.754515 kubelet[1918]: E1031 01:32:09.754497 1918 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Oct 31 01:32:09.755701 kubelet[1918]: W1031 01:32:09.755652 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:09.755701 kubelet[1918]: E1031 01:32:09.755685 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:09.841557 kubelet[1918]: W1031 01:32:09.841520 1918 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Oct 31 01:32:09.841656 kubelet[1918]: E1031 01:32:09.841565 1918 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:32:10.255497 kubelet[1918]: E1031 01:32:10.255478 1918 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:10.256729 kubelet[1918]: E1031 01:32:10.256712 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:10.257591 kubelet[1918]: E1031 01:32:10.257575 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:11.259419 kubelet[1918]: E1031 01:32:11.259401 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:11.259812 kubelet[1918]: E1031 01:32:11.259612 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:11.259929 kubelet[1918]: E1031 01:32:11.259807 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:11.270937 kubelet[1918]: E1031 01:32:11.270920 1918 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 01:32:11.355533 kubelet[1918]: I1031 01:32:11.355517 1918 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:32:11.375699 kubelet[1918]: I1031 01:32:11.375680 1918 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:32:11.375829 kubelet[1918]: E1031 01:32:11.375818 1918 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 01:32:11.384554 kubelet[1918]: E1031 01:32:11.384531 1918 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:11.484786 kubelet[1918]: E1031 01:32:11.484752 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:11.585459 kubelet[1918]: E1031 01:32:11.585436 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:11.686099 kubelet[1918]: E1031 01:32:11.686075 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:11.786215 kubelet[1918]: E1031 01:32:11.786190 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:11.886983 kubelet[1918]: E1031 01:32:11.886836 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:11.986978 kubelet[1918]: E1031 01:32:11.986913 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:12.087665 kubelet[1918]: E1031 01:32:12.087638 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:12.188464 kubelet[1918]: E1031 01:32:12.188397 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:12.260157 kubelet[1918]: E1031 01:32:12.260141 1918 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:32:12.289233 kubelet[1918]: E1031 01:32:12.289215 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:12.389976 kubelet[1918]: E1031 01:32:12.389947 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Oct 31 01:32:12.490725 kubelet[1918]: E1031 01:32:12.490644 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:12.591436 kubelet[1918]: E1031 01:32:12.591400 1918 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:32:12.623090 kubelet[1918]: I1031 01:32:12.622978 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:12.631146 kubelet[1918]: I1031 01:32:12.631116 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:12.633425 kubelet[1918]: I1031 01:32:12.633406 1918 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:13.211054 kubelet[1918]: I1031 01:32:13.211036 1918 apiserver.go:52] "Watching apiserver" Oct 31 01:32:13.226992 kubelet[1918]: I1031 01:32:13.226976 1918 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 01:32:13.423599 systemd[1]: Reloading. Oct 31 01:32:13.474539 /usr/lib/systemd/system-generators/torcx-generator[2209]: time="2025-10-31T01:32:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:32:13.474569 /usr/lib/systemd/system-generators/torcx-generator[2209]: time="2025-10-31T01:32:13Z" level=info msg="torcx already run" Oct 31 01:32:13.535913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:32:13.536037 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Oct 31 01:32:13.547903 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:32:13.608838 systemd[1]: Stopping kubelet.service... Oct 31 01:32:13.628111 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:32:13.628350 systemd[1]: Stopped kubelet.service. Oct 31 01:32:13.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:13.629227 kernel: kauditd_printk_skb: 46 callbacks suppressed Oct 31 01:32:13.629266 kernel: audit: type=1131 audit(1761874333.627:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:13.632489 systemd[1]: Starting kubelet.service... Oct 31 01:32:15.377192 systemd[1]: Started kubelet.service. Oct 31 01:32:15.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:15.381902 kernel: audit: type=1130 audit(1761874335.376:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:15.416249 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 01:32:15.416531 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:32:15.416585 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:32:15.418148 kubelet[2284]: I1031 01:32:15.416677 2284 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:32:15.424104 kubelet[2284]: I1031 01:32:15.423721 2284 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 01:32:15.424104 kubelet[2284]: I1031 01:32:15.423735 2284 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:32:15.424104 kubelet[2284]: I1031 01:32:15.424055 2284 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 01:32:15.425509 kubelet[2284]: I1031 01:32:15.425499 2284 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 01:32:15.467053 kubelet[2284]: I1031 01:32:15.467036 2284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:32:15.469895 kubelet[2284]: E1031 01:32:15.469877 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:32:15.469970 kubelet[2284]: I1031 01:32:15.469962 2284 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Oct 31 01:32:15.472012 kubelet[2284]: I1031 01:32:15.472003 2284 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 01:32:15.474867 kubelet[2284]: I1031 01:32:15.474815 2284 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:32:15.475059 kubelet[2284]: I1031 01:32:15.474929 2284 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVe
rsion":1} Oct 31 01:32:15.475157 kubelet[2284]: I1031 01:32:15.475148 2284 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 01:32:15.475204 kubelet[2284]: I1031 01:32:15.475197 2284 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 01:32:15.477425 kubelet[2284]: I1031 01:32:15.477415 2284 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:32:15.477597 kubelet[2284]: I1031 01:32:15.477591 2284 kubelet.go:446] "Attempting to sync node with API server" Oct 31 01:32:15.478107 kubelet[2284]: I1031 01:32:15.478099 2284 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:32:15.478174 kubelet[2284]: I1031 01:32:15.478167 2284 kubelet.go:352] "Adding apiserver pod source" Oct 31 01:32:15.480817 kubelet[2284]: I1031 01:32:15.480808 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:32:15.483799 kubelet[2284]: I1031 01:32:15.483785 2284 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 01:32:15.497966 kubelet[2284]: I1031 01:32:15.497931 2284 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 01:32:15.499052 kubelet[2284]: I1031 01:32:15.499040 2284 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 01:32:15.499158 kubelet[2284]: I1031 01:32:15.499150 2284 server.go:1287] "Started kubelet" Oct 31 01:32:15.505739 kernel: audit: type=1400 audit(1761874335.500:221): avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:15.506279 kernel: audit: type=1401 audit(1761874335.500:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:15.500000 audit[2284]: AVC avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:15.500000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:15.506401 kubelet[2284]: I1031 01:32:15.501081 2284 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:32:15.510785 kernel: audit: type=1300 audit(1761874335.500:221): arch=c000003e syscall=188 success=no exit=-22 a0=c000969470 a1=c0009d1cf8 a2=c000969440 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:15.500000 audit[2284]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000969470 a1=c0009d1cf8 a2=c000969440 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:15.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:15.514625 kernel: audit: type=1327 audit(1761874335.500:221): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:15.514988 kubelet[2284]: I1031 01:32:15.514935 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:32:15.515995 kubelet[2284]: I1031 01:32:15.515271 2284 kubelet.go:1507] "Unprivileged containerized 
plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 01:32:15.516093 kubelet[2284]: I1031 01:32:15.516082 2284 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 01:32:15.516156 kubelet[2284]: I1031 01:32:15.516148 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:32:15.515000 audit[2284]: AVC avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:15.519673 kernel: audit: type=1400 audit(1761874335.515:222): avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:15.519696 kubelet[2284]: I1031 01:32:15.515903 2284 server.go:479] "Adding debug handlers to kubelet server" Oct 31 01:32:15.519696 kubelet[2284]: I1031 01:32:15.517769 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:32:15.520816 kubelet[2284]: I1031 01:32:15.520039 2284 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 01:32:15.515000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:15.523617 kernel: audit: type=1401 audit(1761874335.515:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:15.515000 audit[2284]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008d2680 a1=c0009d03a8 a2=c000969650 a3=25 
items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:15.530642 kernel: audit: type=1300 audit(1761874335.515:222): arch=c000003e syscall=188 success=no exit=-22 a0=c0008d2680 a1=c0009d03a8 a2=c000969650 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:15.531051 kubelet[2284]: I1031 01:32:15.520087 2284 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:32:15.531130 kubelet[2284]: I1031 01:32:15.520141 2284 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 01:32:15.531243 kubelet[2284]: I1031 01:32:15.531236 2284 reconciler.go:26] "Reconciler: start to sync state" Oct 31 01:32:15.531400 kubelet[2284]: I1031 01:32:15.531392 2284 factory.go:221] Registration of the containerd container factory successfully Oct 31 01:32:15.531459 kubelet[2284]: I1031 01:32:15.531452 2284 factory.go:221] Registration of the systemd container factory successfully Oct 31 01:32:15.531547 kubelet[2284]: I1031 01:32:15.531536 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:32:15.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:15.536632 kernel: audit: type=1327 audit(1761874335.515:222): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:15.568223 kubelet[2284]: I1031 01:32:15.568197 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 01:32:15.569184 kubelet[2284]: I1031 01:32:15.569175 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 01:32:15.569253 kubelet[2284]: I1031 01:32:15.569246 2284 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 01:32:15.569311 kubelet[2284]: I1031 01:32:15.569302 2284 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 01:32:15.569358 kubelet[2284]: I1031 01:32:15.569351 2284 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 01:32:15.569431 kubelet[2284]: E1031 01:32:15.569419 2284 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:32:15.585180 kubelet[2284]: I1031 01:32:15.585166 2284 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:32:15.585330 kubelet[2284]: I1031 01:32:15.585320 2284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:32:15.585389 kubelet[2284]: I1031 01:32:15.585382 2284 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:32:15.585526 kubelet[2284]: I1031 01:32:15.585517 2284 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 01:32:15.585593 kubelet[2284]: I1031 01:32:15.585565 2284 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 01:32:15.585651 kubelet[2284]: I1031 01:32:15.585644 2284 policy_none.go:49] "None policy: Start" Oct 31 01:32:15.585698 kubelet[2284]: I1031 01:32:15.585691 2284 
memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 01:32:15.585748 kubelet[2284]: I1031 01:32:15.585741 2284 state_mem.go:35] "Initializing new in-memory state store" Oct 31 01:32:15.585854 kubelet[2284]: I1031 01:32:15.585847 2284 state_mem.go:75] "Updated machine memory state" Oct 31 01:32:15.586581 kubelet[2284]: I1031 01:32:15.586571 2284 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 01:32:15.586000 audit[2284]: AVC avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:15.586000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:32:15.586000 audit[2284]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000de00c0 a1=c000dbee28 a2=c000de0090 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:15.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:32:15.586870 kubelet[2284]: I1031 01:32:15.586859 2284 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 01:32:15.586997 kubelet[2284]: I1031 01:32:15.586988 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:32:15.587058 kubelet[2284]: I1031 01:32:15.587040 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:32:15.587273 kubelet[2284]: I1031 01:32:15.587266 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:32:15.588856 kubelet[2284]: E1031 01:32:15.588845 2284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 01:32:15.670613 kubelet[2284]: I1031 01:32:15.670523 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:15.693115 kubelet[2284]: I1031 01:32:15.693100 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:32:15.724616 kubelet[2284]: I1031 01:32:15.724584 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.724827 kubelet[2284]: I1031 01:32:15.724654 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:15.732459 kubelet[2284]: I1031 01:32:15.732442 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06baa010e2bb14eb356784f2a3a4cf93-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06baa010e2bb14eb356784f2a3a4cf93\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:15.733747 kubelet[2284]: I1031 01:32:15.733731 2284 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06baa010e2bb14eb356784f2a3a4cf93-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06baa010e2bb14eb356784f2a3a4cf93\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:15.733876 kubelet[2284]: I1031 01:32:15.733861 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.733952 kubelet[2284]: I1031 01:32:15.733940 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.734024 kubelet[2284]: I1031 01:32:15.734013 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:15.734099 kubelet[2284]: I1031 01:32:15.734088 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06baa010e2bb14eb356784f2a3a4cf93-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06baa010e2bb14eb356784f2a3a4cf93\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:15.734201 kubelet[2284]: I1031 01:32:15.734185 2284 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.734293 kubelet[2284]: I1031 01:32:15.734281 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.734365 kubelet[2284]: I1031 01:32:15.734353 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.734433 kubelet[2284]: E1031 01:32:15.733431 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:15.757368 kubelet[2284]: E1031 01:32:15.757341 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:15.757694 kubelet[2284]: E1031 01:32:15.757678 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:15.757863 kubelet[2284]: I1031 01:32:15.757818 2284 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 01:32:15.757966 kubelet[2284]: I1031 01:32:15.757958 2284 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:32:16.483482 kubelet[2284]: I1031 01:32:16.483458 2284 apiserver.go:52] "Watching apiserver" Oct 31 01:32:16.531819 kubelet[2284]: I1031 01:32:16.531799 2284 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 01:32:16.578984 kubelet[2284]: I1031 01:32:16.578963 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:16.579193 kubelet[2284]: I1031 01:32:16.579180 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:16.580628 kubelet[2284]: I1031 01:32:16.579317 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:16.582307 kubelet[2284]: E1031 01:32:16.582294 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 01:32:16.582528 kubelet[2284]: E1031 01:32:16.582519 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 01:32:16.582717 kubelet[2284]: E1031 01:32:16.582708 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:32:16.609586 kubelet[2284]: I1031 01:32:16.609545 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.609531918 podStartE2EDuration="4.609531918s" podCreationTimestamp="2025-10-31 01:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:32:16.602697506 +0000 UTC m=+1.214511581" 
watchObservedRunningTime="2025-10-31 01:32:16.609531918 +0000 UTC m=+1.221345998" Oct 31 01:32:16.619099 kubelet[2284]: I1031 01:32:16.619067 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.619045639 podStartE2EDuration="4.619045639s" podCreationTimestamp="2025-10-31 01:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:32:16.610086513 +0000 UTC m=+1.221900600" watchObservedRunningTime="2025-10-31 01:32:16.619045639 +0000 UTC m=+1.230859720" Oct 31 01:32:18.635094 kubelet[2284]: I1031 01:32:18.635062 2284 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 01:32:18.635746 env[1377]: time="2025-10-31T01:32:18.635716717Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 01:32:18.636130 kubelet[2284]: I1031 01:32:18.636057 2284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 01:32:19.445890 kubelet[2284]: I1031 01:32:19.445850 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.445836099 podStartE2EDuration="7.445836099s" podCreationTimestamp="2025-10-31 01:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:32:16.619543175 +0000 UTC m=+1.231357267" watchObservedRunningTime="2025-10-31 01:32:19.445836099 +0000 UTC m=+4.057650185" Oct 31 01:32:19.565010 kubelet[2284]: I1031 01:32:19.564987 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/508eedc6-be1f-4519-be9b-10935970d31f-lib-modules\") pod 
\"kube-proxy-dk487\" (UID: \"508eedc6-be1f-4519-be9b-10935970d31f\") " pod="kube-system/kube-proxy-dk487" Oct 31 01:32:19.565155 kubelet[2284]: I1031 01:32:19.565140 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfj2q\" (UniqueName: \"kubernetes.io/projected/508eedc6-be1f-4519-be9b-10935970d31f-kube-api-access-lfj2q\") pod \"kube-proxy-dk487\" (UID: \"508eedc6-be1f-4519-be9b-10935970d31f\") " pod="kube-system/kube-proxy-dk487" Oct 31 01:32:19.565225 kubelet[2284]: I1031 01:32:19.565214 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/508eedc6-be1f-4519-be9b-10935970d31f-kube-proxy\") pod \"kube-proxy-dk487\" (UID: \"508eedc6-be1f-4519-be9b-10935970d31f\") " pod="kube-system/kube-proxy-dk487" Oct 31 01:32:19.565295 kubelet[2284]: I1031 01:32:19.565284 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/508eedc6-be1f-4519-be9b-10935970d31f-xtables-lock\") pod \"kube-proxy-dk487\" (UID: \"508eedc6-be1f-4519-be9b-10935970d31f\") " pod="kube-system/kube-proxy-dk487" Oct 31 01:32:19.668557 kubelet[2284]: I1031 01:32:19.668532 2284 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 31 01:32:19.737782 kubelet[2284]: W1031 01:32:19.737719 2284 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Oct 31 01:32:19.737782 kubelet[2284]: E1031 01:32:19.737744 2284 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 31 01:32:19.755984 env[1377]: time="2025-10-31T01:32:19.755896554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dk487,Uid:508eedc6-be1f-4519-be9b-10935970d31f,Namespace:kube-system,Attempt:0,}" Oct 31 01:32:19.765868 env[1377]: time="2025-10-31T01:32:19.765822322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:19.765999 env[1377]: time="2025-10-31T01:32:19.765984742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:19.766069 env[1377]: time="2025-10-31T01:32:19.766055434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:19.766252 env[1377]: time="2025-10-31T01:32:19.766235200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/486bd74957af39ae53b05b97b0184ed8cd96851f976fde336008de6d39efba97 pid=2334 runtime=io.containerd.runc.v2 Oct 31 01:32:19.766370 kubelet[2284]: I1031 01:32:19.766356 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f80ec65-76ba-4a8b-9bd9-ba13432d2e18-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8d5xd\" (UID: \"3f80ec65-76ba-4a8b-9bd9-ba13432d2e18\") " pod="tigera-operator/tigera-operator-7dcd859c48-8d5xd" Oct 31 01:32:19.766505 kubelet[2284]: I1031 01:32:19.766483 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmt5\" (UniqueName: \"kubernetes.io/projected/3f80ec65-76ba-4a8b-9bd9-ba13432d2e18-kube-api-access-blmt5\") pod \"tigera-operator-7dcd859c48-8d5xd\" (UID: \"3f80ec65-76ba-4a8b-9bd9-ba13432d2e18\") " pod="tigera-operator/tigera-operator-7dcd859c48-8d5xd" Oct 31 01:32:19.795571 env[1377]: time="2025-10-31T01:32:19.795542795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dk487,Uid:508eedc6-be1f-4519-be9b-10935970d31f,Namespace:kube-system,Attempt:0,} returns sandbox id \"486bd74957af39ae53b05b97b0184ed8cd96851f976fde336008de6d39efba97\"" Oct 31 01:32:19.798440 env[1377]: time="2025-10-31T01:32:19.798424091Z" level=info msg="CreateContainer within sandbox \"486bd74957af39ae53b05b97b0184ed8cd96851f976fde336008de6d39efba97\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 01:32:19.805739 env[1377]: time="2025-10-31T01:32:19.805722668Z" level=info msg="CreateContainer within sandbox \"486bd74957af39ae53b05b97b0184ed8cd96851f976fde336008de6d39efba97\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} 
returns container id \"93b6ff56265ab3f250ed35f643ab45a996ef97f70de383d2e4edf659630b35dd\"" Oct 31 01:32:19.806887 env[1377]: time="2025-10-31T01:32:19.806874254Z" level=info msg="StartContainer for \"93b6ff56265ab3f250ed35f643ab45a996ef97f70de383d2e4edf659630b35dd\"" Oct 31 01:32:19.835819 env[1377]: time="2025-10-31T01:32:19.835797149Z" level=info msg="StartContainer for \"93b6ff56265ab3f250ed35f643ab45a996ef97f70de383d2e4edf659630b35dd\" returns successfully" Oct 31 01:32:20.036884 env[1377]: time="2025-10-31T01:32:20.036570455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8d5xd,Uid:3f80ec65-76ba-4a8b-9bd9-ba13432d2e18,Namespace:tigera-operator,Attempt:0,}" Oct 31 01:32:20.090742 env[1377]: time="2025-10-31T01:32:20.090698505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:20.090742 env[1377]: time="2025-10-31T01:32:20.090724018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:20.090903 env[1377]: time="2025-10-31T01:32:20.090731230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:20.091104 env[1377]: time="2025-10-31T01:32:20.091079929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f38d9e638235a7cdb8310088c290dac1d9ddfab262189dbf4e0ca2b9c9c9a9b pid=2410 runtime=io.containerd.runc.v2 Oct 31 01:32:20.126334 env[1377]: time="2025-10-31T01:32:20.126306344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8d5xd,Uid:3f80ec65-76ba-4a8b-9bd9-ba13432d2e18,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0f38d9e638235a7cdb8310088c290dac1d9ddfab262189dbf4e0ca2b9c9c9a9b\"" Oct 31 01:32:20.128015 env[1377]: time="2025-10-31T01:32:20.127740093Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 01:32:20.529000 audit[2478]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.532224 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 31 01:32:20.532268 kernel: audit: type=1325 audit(1761874340.529:224): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.529000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff40bc58c0 a2=0 a3=7fff40bc58ac items=0 ppid=2387 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.537871 kernel: audit: type=1300 audit(1761874340.529:224): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff40bc58c0 a2=0 a3=7fff40bc58ac items=0 ppid=2387 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 
01:32:20.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:32:20.540206 kernel: audit: type=1327 audit(1761874340.529:224): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:32:20.542124 kernel: audit: type=1325 audit(1761874340.531:225): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.531000 audit[2479]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.543144 kernel: audit: type=1300 audit(1761874340.531:225): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff79ce6a30 a2=0 a3=7fff79ce6a1c items=0 ppid=2387 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.531000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff79ce6a30 a2=0 a3=7fff79ce6a1c items=0 ppid=2387 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:32:20.548241 kernel: audit: type=1327 audit(1761874340.531:225): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:32:20.548273 kernel: audit: type=1325 audit(1761874340.539:226): table=filter:40 family=10 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 
01:32:20.539000 audit[2480]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.539000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3e032e30 a2=0 a3=7ffd3e032e1c items=0 ppid=2387 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.553848 kernel: audit: type=1300 audit(1761874340.539:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3e032e30 a2=0 a3=7ffd3e032e1c items=0 ppid=2387 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.553879 kernel: audit: type=1327 audit(1761874340.539:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:32:20.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:32:20.539000 audit[2481]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.557511 kernel: audit: type=1325 audit(1761874340.539:227): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.539000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc179a26f0 a2=0 a3=7ffc179a26dc items=0 ppid=2387 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.539000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:32:20.539000 audit[2482]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.539000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff287d01e0 a2=0 a3=7fff287d01cc items=0 ppid=2387 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:32:20.541000 audit[2483]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.541000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9a3db4b0 a2=0 a3=7ffd9a3db49c items=0 ppid=2387 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:32:20.638000 audit[2484]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.638000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff7c6bc980 a2=0 a3=7fff7c6bc96c items=0 ppid=2387 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 31 01:32:20.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 01:32:20.654000 audit[2486]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.654000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffda871ba10 a2=0 a3=7ffda871b9fc items=0 ppid=2387 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 31 01:32:20.663000 audit[2489]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.663000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffef92d06f0 a2=0 a3=7ffef92d06dc items=0 ppid=2387 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 31 01:32:20.663000 audit[2490]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 
01:32:20.663000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefbab9ee0 a2=0 a3=7ffefbab9ecc items=0 ppid=2387 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 01:32:20.665000 audit[2492]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.665000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6b168840 a2=0 a3=7fff6b16882c items=0 ppid=2387 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 01:32:20.666000 audit[2493]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.666000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8d807440 a2=0 a3=7fff8d80742c items=0 ppid=2387 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 
01:32:20.667000 audit[2495]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.667000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0986b540 a2=0 a3=7ffc0986b52c items=0 ppid=2387 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 01:32:20.670000 audit[2498]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.670000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff24f3be10 a2=0 a3=7fff24f3bdfc items=0 ppid=2387 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.670000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 31 01:32:20.671000 audit[2499]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.671000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd49c7cb30 a2=0 a3=7ffd49c7cb1c items=0 ppid=2387 pid=2499 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 01:32:20.672000 audit[2501]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.672000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffcbb9fe60 a2=0 a3=7fffcbb9fe4c items=0 ppid=2387 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 01:32:20.677480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618504942.mount: Deactivated successfully. 
Oct 31 01:32:20.677000 audit[2502]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.677000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3c46d100 a2=0 a3=7ffc3c46d0ec items=0 ppid=2387 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 01:32:20.679000 audit[2504]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.679000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8d4e0f10 a2=0 a3=7ffe8d4e0efc items=0 ppid=2387 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:32:20.681000 audit[2507]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.681000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe970e1800 a2=0 a3=7ffe970e17ec items=0 ppid=2387 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:32:20.683000 audit[2510]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.683000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeac704300 a2=0 a3=7ffeac7042ec items=0 ppid=2387 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 01:32:20.684000 audit[2511]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.684000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6f0af830 a2=0 a3=7fff6f0af81c items=0 ppid=2387 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 01:32:20.689000 audit[2513]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Oct 31 01:32:20.689000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe9f5a7b40 a2=0 a3=7ffe9f5a7b2c items=0 ppid=2387 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:32:20.691000 audit[2516]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.691000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3916ba80 a2=0 a3=7ffe3916ba6c items=0 ppid=2387 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:32:20.692000 audit[2517]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.692000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffded70dd60 a2=0 a3=7ffded70dd4c items=0 ppid=2387 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.692000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 01:32:20.693000 audit[2519]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:32:20.693000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffa4fdc420 a2=0 a3=7fffa4fdc40c items=0 ppid=2387 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 01:32:20.766000 audit[2525]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:20.766000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe266159a0 a2=0 a3=7ffe2661598c items=0 ppid=2387 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:20.783000 audit[2525]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:20.783000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe266159a0 a2=0 a3=7ffe2661598c items=0 ppid=2387 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:20.784000 audit[2530]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.784000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd5593a90 a2=0 a3=7ffdd5593a7c items=0 ppid=2387 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 01:32:20.785000 audit[2532]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.785000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff233760b0 a2=0 a3=7fff2337609c items=0 ppid=2387 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 31 01:32:20.788000 audit[2535]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 
01:32:20.788000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff10f35570 a2=0 a3=7fff10f3555c items=0 ppid=2387 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 31 01:32:20.789000 audit[2536]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.789000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffeb555fa0 a2=0 a3=7fffeb555f8c items=0 ppid=2387 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 01:32:20.790000 audit[2538]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.790000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefb427c80 a2=0 a3=7ffefb427c6c items=0 ppid=2387 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.790000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 01:32:20.791000 audit[2539]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.791000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff858b7430 a2=0 a3=7fff858b741c items=0 ppid=2387 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 01:32:20.793000 audit[2541]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.793000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdfbc7760 a2=0 a3=7ffcdfbc774c items=0 ppid=2387 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 31 01:32:20.795000 audit[2544]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.795000 audit[2544]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffde948dce0 a2=0 a3=7ffde948dccc items=0 ppid=2387 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 01:32:20.796000 audit[2545]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.796000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd5af7740 a2=0 a3=7ffcd5af772c items=0 ppid=2387 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 01:32:20.797000 audit[2547]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.797000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5c6d6120 a2=0 a3=7ffd5c6d610c items=0 ppid=2387 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.797000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 01:32:20.798000 audit[2548]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.798000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff59f91d00 a2=0 a3=7fff59f91cec items=0 ppid=2387 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 01:32:20.800000 audit[2550]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.800000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed6272630 a2=0 a3=7ffed627261c items=0 ppid=2387 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:32:20.802000 audit[2553]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.802000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffee1ec62f0 a2=0 a3=7ffee1ec62dc items=0 ppid=2387 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 01:32:20.806000 audit[2556]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.806000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffca2b4ee0 a2=0 a3=7fffca2b4ecc items=0 ppid=2387 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.806000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 31 01:32:20.806000 audit[2557]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.806000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd19c091f0 a2=0 a3=7ffd19c091dc items=0 ppid=2387 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.806000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 01:32:20.808000 audit[2559]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.808000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe478194c0 a2=0 a3=7ffe478194ac items=0 ppid=2387 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:32:20.810000 audit[2562]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.810000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe734fc4b0 a2=0 a3=7ffe734fc49c items=0 ppid=2387 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:32:20.810000 audit[2563]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.810000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff98948e20 a2=0 a3=7fff98948e0c items=0 ppid=2387 
pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 01:32:20.812000 audit[2565]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.812000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc452ebd10 a2=0 a3=7ffc452ebcfc items=0 ppid=2387 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 01:32:20.812000 audit[2566]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.812000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe98b3b350 a2=0 a3=7ffe98b3b33c items=0 ppid=2387 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 01:32:20.814000 audit[2568]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 31 01:32:20.814000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4290c830 a2=0 a3=7ffe4290c81c items=0 ppid=2387 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:32:20.816000 audit[2571]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:32:20.816000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe9937f60 a2=0 a3=7fffe9937f4c items=0 ppid=2387 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:32:20.817000 audit[2573]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 01:32:20.817000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe01d20d10 a2=0 a3=7ffe01d20cfc items=0 ppid=2387 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.817000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:20.818000 audit[2573]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 01:32:20.818000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe01d20d10 a2=0 a3=7ffe01d20cfc items=0 ppid=2387 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:20.818000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:21.432407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472461399.mount: Deactivated successfully. Oct 31 01:32:21.661896 kubelet[2284]: I1031 01:32:21.661848 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dk487" podStartSLOduration=2.66183405 podStartE2EDuration="2.66183405s" podCreationTimestamp="2025-10-31 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:32:20.589837913 +0000 UTC m=+5.201651993" watchObservedRunningTime="2025-10-31 01:32:21.66183405 +0000 UTC m=+6.273648137" Oct 31 01:32:22.290710 env[1377]: time="2025-10-31T01:32:22.290681496Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:22.297772 env[1377]: time="2025-10-31T01:32:22.297754047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:22.306169 env[1377]: time="2025-10-31T01:32:22.306147577Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:22.310912 env[1377]: time="2025-10-31T01:32:22.310893854Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:22.311226 env[1377]: time="2025-10-31T01:32:22.311196571Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 01:32:22.313580 env[1377]: time="2025-10-31T01:32:22.313554475Z" level=info msg="CreateContainer within sandbox \"0f38d9e638235a7cdb8310088c290dac1d9ddfab262189dbf4e0ca2b9c9c9a9b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 01:32:22.342931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628945993.mount: Deactivated successfully. Oct 31 01:32:22.346091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471894658.mount: Deactivated successfully. 
Oct 31 01:32:22.363899 env[1377]: time="2025-10-31T01:32:22.363872176Z" level=info msg="CreateContainer within sandbox \"0f38d9e638235a7cdb8310088c290dac1d9ddfab262189dbf4e0ca2b9c9c9a9b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e6b51b1fc3ecc5b4a865f345964ae9574df666de3a39de5598d553c2db3d6d03\"" Oct 31 01:32:22.364803 env[1377]: time="2025-10-31T01:32:22.364789344Z" level=info msg="StartContainer for \"e6b51b1fc3ecc5b4a865f345964ae9574df666de3a39de5598d553c2db3d6d03\"" Oct 31 01:32:22.420957 env[1377]: time="2025-10-31T01:32:22.420914528Z" level=info msg="StartContainer for \"e6b51b1fc3ecc5b4a865f345964ae9574df666de3a39de5598d553c2db3d6d03\" returns successfully" Oct 31 01:32:25.447370 kubelet[2284]: I1031 01:32:25.447323 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8d5xd" podStartSLOduration=4.26167603 podStartE2EDuration="6.446575725s" podCreationTimestamp="2025-10-31 01:32:19 +0000 UTC" firstStartedPulling="2025-10-31 01:32:20.126997828 +0000 UTC m=+4.738811908" lastFinishedPulling="2025-10-31 01:32:22.311897525 +0000 UTC m=+6.923711603" observedRunningTime="2025-10-31 01:32:22.609514698 +0000 UTC m=+7.221328784" watchObservedRunningTime="2025-10-31 01:32:25.446575725 +0000 UTC m=+10.058389805" Oct 31 01:32:27.756000 audit[1614]: USER_END pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 01:32:27.757686 sudo[1614]: pam_unix(sudo:session): session closed for user root Oct 31 01:32:27.761617 kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 31 01:32:27.762054 kernel: audit: type=1106 audit(1761874347.756:275): pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:32:27.762086 kernel: audit: type=1104 audit(1761874347.756:276): pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:32:27.756000 audit[1614]: CRED_DISP pid=1614 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:32:27.775292 sshd[1608]: pam_unix(sshd:session): session closed for user core Oct 31 01:32:27.783000 audit[1608]: USER_END pid=1608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:32:27.789618 kernel: audit: type=1106 audit(1761874347.783:277): pid=1608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:32:27.783000 audit[1608]: CRED_DISP pid=1608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:32:27.793617 kernel: audit: type=1104 audit(1761874347.783:278): pid=1608 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:32:27.803827 systemd[1]: sshd@6-139.178.70.102:22-147.75.109.163:46174.service: Deactivated successfully. Oct 31 01:32:27.804707 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 01:32:27.804910 systemd-logind[1342]: Session 9 logged out. Waiting for processes to exit. Oct 31 01:32:27.809968 kernel: audit: type=1131 audit(1761874347.802:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-147.75.109.163:46174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:27.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-147.75.109.163:46174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:32:27.808529 systemd-logind[1342]: Removed session 9. 
Oct 31 01:32:27.952000 audit[2657]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:27.952000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb4833340 a2=0 a3=7fffb483332c items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:27.959504 kernel: audit: type=1325 audit(1761874347.952:280): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:27.959545 kernel: audit: type=1300 audit(1761874347.952:280): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb4833340 a2=0 a3=7fffb483332c items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:27.959566 kernel: audit: type=1327 audit(1761874347.952:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:27.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:27.960000 audit[2657]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:27.967619 kernel: audit: type=1325 audit(1761874347.960:281): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:27.960000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb4833340 a2=0 a3=0 items=0 ppid=2387 pid=2657 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:27.973624 kernel: audit: type=1300 audit(1761874347.960:281): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb4833340 a2=0 a3=0 items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:27.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:27.997000 audit[2659]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:27.997000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd6fdc5370 a2=0 a3=7ffd6fdc535c items=0 ppid=2387 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:27.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:28.003000 audit[2659]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:28.003000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6fdc5370 a2=0 a3=0 items=0 ppid=2387 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:28.003000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:29.586000 audit[2661]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:29.586000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd0ba04630 a2=0 a3=7ffd0ba0461c items=0 ppid=2387 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:29.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:29.590000 audit[2661]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:29.590000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0ba04630 a2=0 a3=0 items=0 ppid=2387 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:29.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:30.685000 audit[2663]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:30.685000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe5c2ef430 a2=0 a3=7ffe5c2ef41c items=0 ppid=2387 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:30.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:30.690000 audit[2663]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:30.690000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5c2ef430 a2=0 a3=0 items=0 ppid=2387 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:30.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:31.651000 audit[2666]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:31.651000 audit[2666]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe84bef270 a2=0 a3=7ffe84bef25c items=0 ppid=2387 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:31.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:31.658000 audit[2666]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:31.658000 audit[2666]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe84bef270 a2=0 a3=0 items=0 ppid=2387 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:31.658000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:31.738916 kubelet[2284]: I1031 01:32:31.738886 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe60a9f4-9375-4a95-8a87-2a85f3696fc8-tigera-ca-bundle\") pod \"calico-typha-6d9666d8bf-tntbt\" (UID: \"fe60a9f4-9375-4a95-8a87-2a85f3696fc8\") " pod="calico-system/calico-typha-6d9666d8bf-tntbt" Oct 31 01:32:31.739174 kubelet[2284]: I1031 01:32:31.738920 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4z2\" (UniqueName: \"kubernetes.io/projected/fe60a9f4-9375-4a95-8a87-2a85f3696fc8-kube-api-access-dc4z2\") pod \"calico-typha-6d9666d8bf-tntbt\" (UID: \"fe60a9f4-9375-4a95-8a87-2a85f3696fc8\") " pod="calico-system/calico-typha-6d9666d8bf-tntbt" Oct 31 01:32:31.739174 kubelet[2284]: I1031 01:32:31.738936 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fe60a9f4-9375-4a95-8a87-2a85f3696fc8-typha-certs\") pod \"calico-typha-6d9666d8bf-tntbt\" (UID: \"fe60a9f4-9375-4a95-8a87-2a85f3696fc8\") " pod="calico-system/calico-typha-6d9666d8bf-tntbt" Oct 31 01:32:31.939742 kubelet[2284]: I1031 01:32:31.939657 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6aa3a0-b43b-44ef-be8e-1975d547bafc-tigera-ca-bundle\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.939950 kubelet[2284]: I1031 01:32:31.939854 2284 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-var-run-calico\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940064 kubelet[2284]: I1031 01:32:31.940055 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-cni-bin-dir\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940145 kubelet[2284]: I1031 01:32:31.940137 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-cni-net-dir\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940221 kubelet[2284]: I1031 01:32:31.940212 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-var-lib-calico\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940294 kubelet[2284]: I1031 01:32:31.940284 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-flexvol-driver-host\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940369 kubelet[2284]: I1031 01:32:31.940359 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-policysync\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940442 kubelet[2284]: I1031 01:32:31.940434 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-cni-log-dir\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940519 kubelet[2284]: I1031 01:32:31.940510 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af6aa3a0-b43b-44ef-be8e-1975d547bafc-node-certs\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940593 kubelet[2284]: I1031 01:32:31.940585 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-xtables-lock\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940676 kubelet[2284]: I1031 01:32:31.940668 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhrt\" (UniqueName: \"kubernetes.io/projected/af6aa3a0-b43b-44ef-be8e-1975d547bafc-kube-api-access-6vhrt\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.940753 kubelet[2284]: I1031 01:32:31.940744 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/af6aa3a0-b43b-44ef-be8e-1975d547bafc-lib-modules\") pod \"calico-node-fsvc4\" (UID: \"af6aa3a0-b43b-44ef-be8e-1975d547bafc\") " pod="calico-system/calico-node-fsvc4" Oct 31 01:32:31.976884 env[1377]: time="2025-10-31T01:32:31.976847386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d9666d8bf-tntbt,Uid:fe60a9f4-9375-4a95-8a87-2a85f3696fc8,Namespace:calico-system,Attempt:0,}" Oct 31 01:32:32.033336 env[1377]: time="2025-10-31T01:32:32.033277041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:32.033336 env[1377]: time="2025-10-31T01:32:32.033312596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:32.033480 env[1377]: time="2025-10-31T01:32:32.033456863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:32.035912 env[1377]: time="2025-10-31T01:32:32.033631647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52f61607374fddec5a1ee5e15c01d0e9c8a786ff637b422dc6e9f079142e4af9 pid=2676 runtime=io.containerd.runc.v2 Oct 31 01:32:32.040284 kubelet[2284]: E1031 01:32:32.040261 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:32:32.066419 kubelet[2284]: E1031 01:32:32.066396 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.066550 kubelet[2284]: W1031 01:32:32.066539 2284 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.070637 kubelet[2284]: E1031 01:32:32.070594 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.076725 kubelet[2284]: E1031 01:32:32.076693 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.076860 kubelet[2284]: W1031 01:32:32.076845 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.076940 kubelet[2284]: E1031 01:32:32.076927 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.117402 env[1377]: time="2025-10-31T01:32:32.117336331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d9666d8bf-tntbt,Uid:fe60a9f4-9375-4a95-8a87-2a85f3696fc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"52f61607374fddec5a1ee5e15c01d0e9c8a786ff637b422dc6e9f079142e4af9\"" Oct 31 01:32:32.118319 env[1377]: time="2025-10-31T01:32:32.118307264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120686 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.123670 kubelet[2284]: W1031 01:32:32.120702 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120716 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120797 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.123670 kubelet[2284]: W1031 01:32:32.120802 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120807 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120877 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.123670 kubelet[2284]: W1031 01:32:32.120883 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120890 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.123670 kubelet[2284]: E1031 01:32:32.120985 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.127538 kubelet[2284]: W1031 01:32:32.120990 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.127538 kubelet[2284]: E1031 01:32:32.120995 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.127538 kubelet[2284]: E1031 01:32:32.121071 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.127538 kubelet[2284]: W1031 01:32:32.121075 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.127538 kubelet[2284]: E1031 01:32:32.121080 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.127538 kubelet[2284]: E1031 01:32:32.121148 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.127538 kubelet[2284]: W1031 01:32:32.121152 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.127538 kubelet[2284]: E1031 01:32:32.121157 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.127538 kubelet[2284]: E1031 01:32:32.121233 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.127538 kubelet[2284]: W1031 01:32:32.121239 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133537 env[1377]: time="2025-10-31T01:32:32.127098488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fsvc4,Uid:af6aa3a0-b43b-44ef-be8e-1975d547bafc,Namespace:calico-system,Attempt:0,}" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121244 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121315 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.133569 kubelet[2284]: W1031 01:32:32.121319 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121323 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121395 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.133569 kubelet[2284]: W1031 01:32:32.121400 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121404 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121471 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.133569 kubelet[2284]: W1031 01:32:32.121476 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133569 kubelet[2284]: E1031 01:32:32.121480 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121550 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.133772 kubelet[2284]: W1031 01:32:32.121554 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121559 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121632 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.133772 kubelet[2284]: W1031 01:32:32.121636 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121640 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121710 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.133772 kubelet[2284]: W1031 01:32:32.121714 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121719 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.133772 kubelet[2284]: E1031 01:32:32.121787 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.144845 kubelet[2284]: W1031 01:32:32.121791 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.144845 kubelet[2284]: E1031 01:32:32.121796 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.144845 kubelet[2284]: E1031 01:32:32.121869 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.144845 kubelet[2284]: W1031 01:32:32.121873 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.144845 kubelet[2284]: E1031 01:32:32.121877 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.144845 kubelet[2284]: E1031 01:32:32.121945 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.144845 kubelet[2284]: W1031 01:32:32.121949 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.144845 kubelet[2284]: E1031 01:32:32.121953 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.144845 kubelet[2284]: E1031 01:32:32.122030 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.144845 kubelet[2284]: W1031 01:32:32.122039 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122045 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122115 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145022 kubelet[2284]: W1031 01:32:32.122119 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122124 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122190 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145022 kubelet[2284]: W1031 01:32:32.122195 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122199 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122261 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145022 kubelet[2284]: W1031 01:32:32.122265 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145022 kubelet[2284]: E1031 01:32:32.122270 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145198 kubelet[2284]: E1031 01:32:32.142254 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145198 kubelet[2284]: W1031 01:32:32.142269 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145198 kubelet[2284]: E1031 01:32:32.142284 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145198 kubelet[2284]: I1031 01:32:32.142306 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68e0baab-eac1-409d-a79c-945bc83eb739-varrun\") pod \"csi-node-driver-vc6st\" (UID: \"68e0baab-eac1-409d-a79c-945bc83eb739\") " pod="calico-system/csi-node-driver-vc6st" Oct 31 01:32:32.145198 kubelet[2284]: E1031 01:32:32.142438 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145198 kubelet[2284]: W1031 01:32:32.142444 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145198 kubelet[2284]: E1031 01:32:32.142496 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145198 kubelet[2284]: I1031 01:32:32.142510 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68e0baab-eac1-409d-a79c-945bc83eb739-kubelet-dir\") pod \"csi-node-driver-vc6st\" (UID: \"68e0baab-eac1-409d-a79c-945bc83eb739\") " pod="calico-system/csi-node-driver-vc6st" Oct 31 01:32:32.145198 kubelet[2284]: E1031 01:32:32.142654 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145404 kubelet[2284]: W1031 01:32:32.142663 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145404 kubelet[2284]: E1031 01:32:32.142675 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145404 kubelet[2284]: E1031 01:32:32.142783 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145404 kubelet[2284]: W1031 01:32:32.142788 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145404 kubelet[2284]: E1031 01:32:32.142804 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145404 kubelet[2284]: E1031 01:32:32.142912 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145404 kubelet[2284]: W1031 01:32:32.142916 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145404 kubelet[2284]: E1031 01:32:32.142925 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145404 kubelet[2284]: I1031 01:32:32.142943 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68e0baab-eac1-409d-a79c-945bc83eb739-socket-dir\") pod \"csi-node-driver-vc6st\" (UID: \"68e0baab-eac1-409d-a79c-945bc83eb739\") " pod="calico-system/csi-node-driver-vc6st" Oct 31 01:32:32.145692 kubelet[2284]: E1031 01:32:32.143037 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145692 kubelet[2284]: W1031 01:32:32.143043 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145692 kubelet[2284]: E1031 01:32:32.143049 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145692 kubelet[2284]: I1031 01:32:32.143058 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-747rs\" (UniqueName: \"kubernetes.io/projected/68e0baab-eac1-409d-a79c-945bc83eb739-kube-api-access-747rs\") pod \"csi-node-driver-vc6st\" (UID: \"68e0baab-eac1-409d-a79c-945bc83eb739\") " pod="calico-system/csi-node-driver-vc6st" Oct 31 01:32:32.145692 kubelet[2284]: E1031 01:32:32.143151 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145692 kubelet[2284]: W1031 01:32:32.143157 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145692 kubelet[2284]: E1031 01:32:32.143165 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145692 kubelet[2284]: E1031 01:32:32.143252 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145692 kubelet[2284]: W1031 01:32:32.143263 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143269 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143369 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145987 kubelet[2284]: W1031 01:32:32.143374 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143383 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143468 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145987 kubelet[2284]: W1031 01:32:32.143473 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143482 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143638 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.145987 kubelet[2284]: W1031 01:32:32.143643 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.145987 kubelet[2284]: E1031 01:32:32.143652 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.146277 kubelet[2284]: I1031 01:32:32.143661 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68e0baab-eac1-409d-a79c-945bc83eb739-registration-dir\") pod \"csi-node-driver-vc6st\" (UID: \"68e0baab-eac1-409d-a79c-945bc83eb739\") " pod="calico-system/csi-node-driver-vc6st" Oct 31 01:32:32.146277 kubelet[2284]: E1031 01:32:32.143751 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.146277 kubelet[2284]: W1031 01:32:32.143756 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.146277 kubelet[2284]: E1031 01:32:32.143765 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.146277 kubelet[2284]: E1031 01:32:32.143864 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.146277 kubelet[2284]: W1031 01:32:32.143869 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.146277 kubelet[2284]: E1031 01:32:32.143878 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.146277 kubelet[2284]: E1031 01:32:32.143977 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.146277 kubelet[2284]: W1031 01:32:32.143982 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.146582 kubelet[2284]: E1031 01:32:32.143986 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.146582 kubelet[2284]: E1031 01:32:32.144092 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.146582 kubelet[2284]: W1031 01:32:32.144097 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.146582 kubelet[2284]: E1031 01:32:32.144101 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.190669 env[1377]: time="2025-10-31T01:32:32.189931403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:32.190669 env[1377]: time="2025-10-31T01:32:32.189976633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:32.190669 env[1377]: time="2025-10-31T01:32:32.189999749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:32.190669 env[1377]: time="2025-10-31T01:32:32.190097246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1 pid=2764 runtime=io.containerd.runc.v2 Oct 31 01:32:32.220505 env[1377]: time="2025-10-31T01:32:32.220199288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fsvc4,Uid:af6aa3a0-b43b-44ef-be8e-1975d547bafc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\"" Oct 31 01:32:32.244711 kubelet[2284]: E1031 01:32:32.244689 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.244711 kubelet[2284]: W1031 01:32:32.244705 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.244833 kubelet[2284]: E1031 01:32:32.244719 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.244833 kubelet[2284]: E1031 01:32:32.244831 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.244874 kubelet[2284]: W1031 01:32:32.244835 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.244874 kubelet[2284]: E1031 01:32:32.244840 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.244970 kubelet[2284]: E1031 01:32:32.244930 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.244970 kubelet[2284]: W1031 01:32:32.244936 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.244970 kubelet[2284]: E1031 01:32:32.244942 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.245096 kubelet[2284]: E1031 01:32:32.245027 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.245096 kubelet[2284]: W1031 01:32:32.245031 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.245096 kubelet[2284]: E1031 01:32:32.245036 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.245260 kubelet[2284]: E1031 01:32:32.245202 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.245260 kubelet[2284]: W1031 01:32:32.245207 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.245260 kubelet[2284]: E1031 01:32:32.245213 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.245353 kubelet[2284]: E1031 01:32:32.245326 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.245353 kubelet[2284]: W1031 01:32:32.245330 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.245353 kubelet[2284]: E1031 01:32:32.245337 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.245475 kubelet[2284]: E1031 01:32:32.245445 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.245475 kubelet[2284]: W1031 01:32:32.245450 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.245475 kubelet[2284]: E1031 01:32:32.245454 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.245744 kubelet[2284]: E1031 01:32:32.245594 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.245744 kubelet[2284]: W1031 01:32:32.245611 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.245744 kubelet[2284]: E1031 01:32:32.245621 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.245825 kubelet[2284]: E1031 01:32:32.245756 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.245825 kubelet[2284]: W1031 01:32:32.245762 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.245825 kubelet[2284]: E1031 01:32:32.245769 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.245992 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.246705 kubelet[2284]: W1031 01:32:32.245999 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.246014 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.246094 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.246705 kubelet[2284]: W1031 01:32:32.246098 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.246109 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.246203 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.246705 kubelet[2284]: W1031 01:32:32.246208 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.246246 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.246705 kubelet[2284]: E1031 01:32:32.246302 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.247000 kubelet[2284]: W1031 01:32:32.246306 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.247000 kubelet[2284]: E1031 01:32:32.246348 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.247000 kubelet[2284]: E1031 01:32:32.246399 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.247000 kubelet[2284]: W1031 01:32:32.246403 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.247000 kubelet[2284]: E1031 01:32:32.246438 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.247000 kubelet[2284]: E1031 01:32:32.246486 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.247000 kubelet[2284]: W1031 01:32:32.246492 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.247000 kubelet[2284]: E1031 01:32:32.246546 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.247000 kubelet[2284]: E1031 01:32:32.246596 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.247000 kubelet[2284]: W1031 01:32:32.246600 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.246616 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.246696 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.248787 kubelet[2284]: W1031 01:32:32.246700 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.246733 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.246998 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.248787 kubelet[2284]: W1031 01:32:32.247015 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.247024 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.247264 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.248787 kubelet[2284]: W1031 01:32:32.247269 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248787 kubelet[2284]: E1031 01:32:32.247277 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247421 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.248960 kubelet[2284]: W1031 01:32:32.247426 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247434 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247571 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.248960 kubelet[2284]: W1031 01:32:32.247576 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247645 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247717 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.248960 kubelet[2284]: W1031 01:32:32.247721 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247728 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.248960 kubelet[2284]: E1031 01:32:32.247844 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.258589 kubelet[2284]: W1031 01:32:32.247849 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.258589 kubelet[2284]: E1031 01:32:32.247856 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.258589 kubelet[2284]: E1031 01:32:32.247997 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.258589 kubelet[2284]: W1031 01:32:32.248001 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.258589 kubelet[2284]: E1031 01:32:32.248007 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.258589 kubelet[2284]: E1031 01:32:32.251136 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.258589 kubelet[2284]: W1031 01:32:32.251143 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.258589 kubelet[2284]: E1031 01:32:32.251150 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:32:32.259012 kubelet[2284]: E1031 01:32:32.259002 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:32:32.259068 kubelet[2284]: W1031 01:32:32.259058 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:32:32.259121 kubelet[2284]: E1031 01:32:32.259111 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:32:32.670000 audit[2827]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:32.670000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd73601ad0 a2=0 a3=7ffd73601abc items=0 ppid=2387 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:32.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:32.674000 audit[2827]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:32.674000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd73601ad0 a2=0 a3=0 items=0 ppid=2387 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:32.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:32.845396 systemd[1]: run-containerd-runc-k8s.io-52f61607374fddec5a1ee5e15c01d0e9c8a786ff637b422dc6e9f079142e4af9-runc.uav08D.mount: Deactivated successfully. 
Oct 31 01:32:33.570494 kubelet[2284]: E1031 01:32:33.570462 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:32:33.769153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786560429.mount: Deactivated successfully. Oct 31 01:32:35.348938 env[1377]: time="2025-10-31T01:32:35.348911617Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:35.363451 env[1377]: time="2025-10-31T01:32:35.363425032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:35.370895 env[1377]: time="2025-10-31T01:32:35.370872603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:35.375655 env[1377]: time="2025-10-31T01:32:35.375635663Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:35.375985 env[1377]: time="2025-10-31T01:32:35.375970287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 01:32:35.377220 env[1377]: time="2025-10-31T01:32:35.377146611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 01:32:35.391891 
env[1377]: time="2025-10-31T01:32:35.391864829Z" level=info msg="CreateContainer within sandbox \"52f61607374fddec5a1ee5e15c01d0e9c8a786ff637b422dc6e9f079142e4af9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 01:32:35.570703 kubelet[2284]: E1031 01:32:35.570678 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:32:35.847221 env[1377]: time="2025-10-31T01:32:35.847195006Z" level=info msg="CreateContainer within sandbox \"52f61607374fddec5a1ee5e15c01d0e9c8a786ff637b422dc6e9f079142e4af9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ab8c71d43fa0f39fded66ae6fc0e6c9cda3fb82af9339596958cff8d50a5ff8e\"" Oct 31 01:32:35.847816 env[1377]: time="2025-10-31T01:32:35.847797528Z" level=info msg="StartContainer for \"ab8c71d43fa0f39fded66ae6fc0e6c9cda3fb82af9339596958cff8d50a5ff8e\"" Oct 31 01:32:35.926356 env[1377]: time="2025-10-31T01:32:35.926302889Z" level=info msg="StartContainer for \"ab8c71d43fa0f39fded66ae6fc0e6c9cda3fb82af9339596958cff8d50a5ff8e\" returns successfully" Oct 31 01:32:36.647026 kubelet[2284]: I1031 01:32:36.646990 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d9666d8bf-tntbt" podStartSLOduration=2.388340253 podStartE2EDuration="5.646966302s" podCreationTimestamp="2025-10-31 01:32:31 +0000 UTC" firstStartedPulling="2025-10-31 01:32:32.118154673 +0000 UTC m=+16.729968748" lastFinishedPulling="2025-10-31 01:32:35.376780719 +0000 UTC m=+19.988594797" observedRunningTime="2025-10-31 01:32:36.646487965 +0000 UTC m=+21.258302048" watchObservedRunningTime="2025-10-31 01:32:36.646966302 +0000 UTC m=+21.258780381" Oct 31 01:32:36.651591 kubelet[2284]: E1031 01:32:36.651575 
2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:32:36.651591 kubelet[2284]: W1031 01:32:36.651589 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:32:36.653904 kubelet[2284]: E1031 01:32:36.651601 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:32:36.679953 kubelet[2284]: E1031 01:32:36.679895 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 31 01:32:37.519836 env[1377]: time="2025-10-31T01:32:37.519801414Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:37.529545 env[1377]: time="2025-10-31T01:32:37.529515774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:37.531595 env[1377]: time="2025-10-31T01:32:37.531577049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:37.538280 env[1377]: time="2025-10-31T01:32:37.538247158Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:37.538776 env[1377]: time="2025-10-31T01:32:37.538752376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Oct 31 01:32:37.540864 env[1377]: time="2025-10-31T01:32:37.540839128Z" level=info msg="CreateContainer within sandbox \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 31 01:32:37.570614 kubelet[2284]: E1031 01:32:37.570572 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" 
podUID="68e0baab-eac1-409d-a79c-945bc83eb739"
Oct 31 01:32:37.577822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200087950.mount: Deactivated successfully.
Oct 31 01:32:37.599357 env[1377]: time="2025-10-31T01:32:37.599306653Z" level=info msg="CreateContainer within sandbox \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0\""
Oct 31 01:32:37.599774 env[1377]: time="2025-10-31T01:32:37.599754976Z" level=info msg="StartContainer for \"ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0\""
Oct 31 01:32:37.619053 kubelet[2284]: I1031 01:32:37.619029 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 31 01:32:37.623387 systemd[1]: run-containerd-runc-k8s.io-ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0-runc.XmdeBN.mount: Deactivated successfully.
Oct 31 01:32:37.652988 env[1377]: time="2025-10-31T01:32:37.652960307Z" level=info msg="StartContainer for \"ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0\" returns successfully"
Oct 31 01:32:37.660051 kubelet[2284]: E1031 01:32:37.659934 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:32:37.660051 kubelet[2284]: W1031 01:32:37.659952 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:32:37.660051 kubelet[2284]: E1031 01:32:37.659968 2284 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 31 01:32:37.989037 env[1377]: time="2025-10-31T01:32:37.989006090Z" level=info msg="shim disconnected" id=ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0
Oct 31 01:32:37.989222 env[1377]: time="2025-10-31T01:32:37.989208748Z" level=warning msg="cleaning up after shim disconnected" id=ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0 namespace=k8s.io
Oct 31 01:32:37.989288 env[1377]: time="2025-10-31T01:32:37.989276429Z" level=info msg="cleaning up dead shim"
Oct 31 01:32:37.994142 env[1377]: time="2025-10-31T01:32:37.994114298Z" level=warning msg="cleanup warnings time=\"2025-10-31T01:32:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n"
Oct 31 01:32:38.576419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad287018003b1a6b02ef9083c73fcb7c8435795919618324e7ad4ef4a873f1c0-rootfs.mount: Deactivated successfully.
Oct 31 01:32:38.622285 env[1377]: time="2025-10-31T01:32:38.622253923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Oct 31 01:32:39.570467 kubelet[2284]: E1031 01:32:39.570435 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739"
Oct 31 01:32:41.571039 kubelet[2284]: E1031 01:32:41.570775 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739"
Oct 31 01:32:43.570639 kubelet[2284]: E1031 01:32:43.570596 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739"
Oct 31 01:32:43.833215 env[1377]: time="2025-10-31T01:32:43.833191699Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:43.870417 env[1377]: time="2025-10-31T01:32:43.870396748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:43.884540 env[1377]: time="2025-10-31T01:32:43.884512794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:43.899789 env[1377]: time="2025-10-31T01:32:43.899763730Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:32:43.900501 env[1377]: time="2025-10-31T01:32:43.900478625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Oct 31 01:32:43.905159 env[1377]: time="2025-10-31T01:32:43.905133145Z" level=info msg="CreateContainer within sandbox \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 31 01:32:43.943128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072755281.mount: Deactivated successfully. 
Oct 31 01:32:43.947801 env[1377]: time="2025-10-31T01:32:43.947778286Z" level=info msg="CreateContainer within sandbox \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5\""
Oct 31 01:32:43.949341 env[1377]: time="2025-10-31T01:32:43.949321303Z" level=info msg="StartContainer for \"d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5\""
Oct 31 01:32:44.037456 env[1377]: time="2025-10-31T01:32:44.037432944Z" level=info msg="StartContainer for \"d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5\" returns successfully"
Oct 31 01:32:44.942736 systemd[1]: run-containerd-runc-k8s.io-d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5-runc.Aubo5M.mount: Deactivated successfully.
Oct 31 01:32:45.572141 kubelet[2284]: E1031 01:32:45.572108 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739"
Oct 31 01:32:46.568881 env[1377]: time="2025-10-31T01:32:46.568808820Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 01:32:46.585678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5-rootfs.mount: Deactivated successfully.
Oct 31 01:32:46.607980 env[1377]: time="2025-10-31T01:32:46.607947143Z" level=info msg="shim disconnected" id=d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5
Oct 31 01:32:46.607980 env[1377]: time="2025-10-31T01:32:46.607978932Z" level=warning msg="cleaning up after shim disconnected" id=d57915bd8d0f4dc7555a27e6d015ea0753aab3698597315f2208705cf15958f5 namespace=k8s.io
Oct 31 01:32:46.607980 env[1377]: time="2025-10-31T01:32:46.607988225Z" level=info msg="cleaning up dead shim"
Oct 31 01:32:46.613455 env[1377]: time="2025-10-31T01:32:46.613302246Z" level=warning msg="cleanup warnings time=\"2025-10-31T01:32:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3034 runtime=io.containerd.runc.v2\n"
Oct 31 01:32:46.651529 env[1377]: time="2025-10-31T01:32:46.651484861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Oct 31 01:32:46.675964 kubelet[2284]: I1031 01:32:46.675934 2284 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Oct 31 01:32:46.909271 kubelet[2284]: I1031 01:32:46.909249 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzscg\" (UniqueName: \"kubernetes.io/projected/03dcc52f-4acf-4546-b99b-cf4de4d54704-kube-api-access-qzscg\") pod \"calico-apiserver-59f47b46b-44rl6\" (UID: \"03dcc52f-4acf-4546-b99b-cf4de4d54704\") " pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6"
Oct 31 01:32:46.909444 kubelet[2284]: I1031 01:32:46.909434 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-std5w\" (UniqueName: \"kubernetes.io/projected/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-kube-api-access-std5w\") pod \"whisker-dc76fd6ff-vkhmk\" (UID: \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\") " pod="calico-system/whisker-dc76fd6ff-vkhmk"
Oct 31 01:32:46.909520 kubelet[2284]: I1031 01:32:46.909512 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4trwg\" (UniqueName: \"kubernetes.io/projected/c35d3eac-7307-43de-bf4e-73472193a4cb-kube-api-access-4trwg\") pod \"coredns-668d6bf9bc-j9jdm\" (UID: \"c35d3eac-7307-43de-bf4e-73472193a4cb\") " pod="kube-system/coredns-668d6bf9bc-j9jdm"
Oct 31 01:32:46.909600 kubelet[2284]: I1031 01:32:46.909591 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-ca-bundle\") pod \"whisker-dc76fd6ff-vkhmk\" (UID: \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\") " pod="calico-system/whisker-dc76fd6ff-vkhmk"
Oct 31 01:32:46.909684 kubelet[2284]: I1031 01:32:46.909674 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/03dcc52f-4acf-4546-b99b-cf4de4d54704-calico-apiserver-certs\") pod \"calico-apiserver-59f47b46b-44rl6\" (UID: \"03dcc52f-4acf-4546-b99b-cf4de4d54704\") " pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6"
Oct 31 01:32:46.909977 kubelet[2284]: I1031 01:32:46.909967 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b64v\" (UniqueName: \"kubernetes.io/projected/8f69b598-caa2-4abe-911a-df60fbb3c4df-kube-api-access-5b64v\") pod \"calico-apiserver-59f47b46b-s9kdt\" (UID: \"8f69b598-caa2-4abe-911a-df60fbb3c4df\") " pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt"
Oct 31 01:32:46.910041 kubelet[2284]: I1031 01:32:46.910031 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c35d3eac-7307-43de-bf4e-73472193a4cb-config-volume\") pod \"coredns-668d6bf9bc-j9jdm\" (UID: \"c35d3eac-7307-43de-bf4e-73472193a4cb\") " pod="kube-system/coredns-668d6bf9bc-j9jdm"
Oct 31 01:32:46.910103 kubelet[2284]: I1031 01:32:46.910094 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdtz\" (UniqueName: \"kubernetes.io/projected/d10ebfbe-91f8-4576-8542-06b4d8a152be-kube-api-access-9kdtz\") pod \"goldmane-666569f655-8nrww\" (UID: \"d10ebfbe-91f8-4576-8542-06b4d8a152be\") " pod="calico-system/goldmane-666569f655-8nrww"
Oct 31 01:32:46.922010 kubelet[2284]: I1031 01:32:46.910166 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f69b598-caa2-4abe-911a-df60fbb3c4df-calico-apiserver-certs\") pod \"calico-apiserver-59f47b46b-s9kdt\" (UID: \"8f69b598-caa2-4abe-911a-df60fbb3c4df\") " pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt"
Oct 31 01:32:46.922010 kubelet[2284]: I1031 01:32:46.910184 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-828gm\" (UniqueName: \"kubernetes.io/projected/4bc01c1a-5001-4f6b-ad3f-615201398d71-kube-api-access-828gm\") pod \"coredns-668d6bf9bc-27j8s\" (UID: \"4bc01c1a-5001-4f6b-ad3f-615201398d71\") " pod="kube-system/coredns-668d6bf9bc-27j8s"
Oct 31 01:32:46.922010 kubelet[2284]: I1031 01:32:46.910194 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d10ebfbe-91f8-4576-8542-06b4d8a152be-goldmane-ca-bundle\") pod \"goldmane-666569f655-8nrww\" (UID: \"d10ebfbe-91f8-4576-8542-06b4d8a152be\") " pod="calico-system/goldmane-666569f655-8nrww"
Oct 31 01:32:46.922010 kubelet[2284]: I1031 01:32:46.910207 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-backend-key-pair\") pod \"whisker-dc76fd6ff-vkhmk\" (UID: \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\") " pod="calico-system/whisker-dc76fd6ff-vkhmk"
Oct 31 01:32:46.922010 kubelet[2284]: I1031 01:32:46.910218 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ff4853-41a8-4d7e-a0bc-f8d8451a400b-tigera-ca-bundle\") pod \"calico-kube-controllers-6cf7465748-bpws2\" (UID: \"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b\") " pod="calico-system/calico-kube-controllers-6cf7465748-bpws2"
Oct 31 01:32:46.934702 kubelet[2284]: I1031 01:32:46.910228 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3586a90a-6636-45ab-8082-c6aa9bdb62e3-calico-apiserver-certs\") pod \"calico-apiserver-7495b6f49d-9bz8s\" (UID: \"3586a90a-6636-45ab-8082-c6aa9bdb62e3\") " pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s"
Oct 31 01:32:46.934702 kubelet[2284]: I1031 01:32:46.910241 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc01c1a-5001-4f6b-ad3f-615201398d71-config-volume\") pod \"coredns-668d6bf9bc-27j8s\" (UID: \"4bc01c1a-5001-4f6b-ad3f-615201398d71\") " pod="kube-system/coredns-668d6bf9bc-27j8s"
Oct 31 01:32:46.934702 kubelet[2284]: I1031 01:32:46.910251 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf4c\" (UniqueName: \"kubernetes.io/projected/3586a90a-6636-45ab-8082-c6aa9bdb62e3-kube-api-access-wqf4c\") pod \"calico-apiserver-7495b6f49d-9bz8s\" (UID: \"3586a90a-6636-45ab-8082-c6aa9bdb62e3\") " pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s"
Oct 31 01:32:46.934702 kubelet[2284]: I1031 01:32:46.910272 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10ebfbe-91f8-4576-8542-06b4d8a152be-config\") pod \"goldmane-666569f655-8nrww\" (UID: \"d10ebfbe-91f8-4576-8542-06b4d8a152be\") " pod="calico-system/goldmane-666569f655-8nrww"
Oct 31 01:32:46.934702 kubelet[2284]: I1031 01:32:46.910295 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d10ebfbe-91f8-4576-8542-06b4d8a152be-goldmane-key-pair\") pod \"goldmane-666569f655-8nrww\" (UID: \"d10ebfbe-91f8-4576-8542-06b4d8a152be\") " pod="calico-system/goldmane-666569f655-8nrww"
Oct 31 01:32:46.934806 kubelet[2284]: I1031 01:32:46.910304 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9l7q\" (UniqueName: \"kubernetes.io/projected/c5ff4853-41a8-4d7e-a0bc-f8d8451a400b-kube-api-access-r9l7q\") pod \"calico-kube-controllers-6cf7465748-bpws2\" (UID: \"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b\") " pod="calico-system/calico-kube-controllers-6cf7465748-bpws2"
Oct 31 01:32:47.086564 env[1377]: time="2025-10-31T01:32:47.086339760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j9jdm,Uid:c35d3eac-7307-43de-bf4e-73472193a4cb,Namespace:kube-system,Attempt:0,}"
Oct 31 01:32:47.097406 env[1377]: time="2025-10-31T01:32:47.097255017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7495b6f49d-9bz8s,Uid:3586a90a-6636-45ab-8082-c6aa9bdb62e3,Namespace:calico-apiserver,Attempt:0,}"
Oct 31 01:32:47.103100 env[1377]: time="2025-10-31T01:32:47.103072928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc76fd6ff-vkhmk,Uid:d48c0130-5c14-4eac-a5d1-65ae7e9f18bd,Namespace:calico-system,Attempt:0,}"
Oct 31 01:32:47.104770 env[1377]: time="2025-10-31T01:32:47.104745674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27j8s,Uid:4bc01c1a-5001-4f6b-ad3f-615201398d71,Namespace:kube-system,Attempt:0,}"
Oct 31 01:32:47.107422 env[1377]: time="2025-10-31T01:32:47.107406244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-s9kdt,Uid:8f69b598-caa2-4abe-911a-df60fbb3c4df,Namespace:calico-apiserver,Attempt:0,}"
Oct 31 01:32:47.109011 env[1377]: time="2025-10-31T01:32:47.108994834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf7465748-bpws2,Uid:c5ff4853-41a8-4d7e-a0bc-f8d8451a400b,Namespace:calico-system,Attempt:0,}"
Oct 31 01:32:47.110341 env[1377]: time="2025-10-31T01:32:47.110317074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-44rl6,Uid:03dcc52f-4acf-4546-b99b-cf4de4d54704,Namespace:calico-apiserver,Attempt:0,}"
Oct 31 01:32:47.112049 env[1377]: time="2025-10-31T01:32:47.112028834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8nrww,Uid:d10ebfbe-91f8-4576-8542-06b4d8a152be,Namespace:calico-system,Attempt:0,}"
Oct 31 01:32:47.574655 env[1377]: time="2025-10-31T01:32:47.574594255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vc6st,Uid:68e0baab-eac1-409d-a79c-945bc83eb739,Namespace:calico-system,Attempt:0,}"
Oct 31 01:32:48.318060 env[1377]: time="2025-10-31T01:32:48.318002895Z" level=error msg="Failed to destroy network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.319969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d-shm.mount: Deactivated successfully.
Oct 31 01:32:48.320791 env[1377]: time="2025-10-31T01:32:48.320763908Z" level=error msg="encountered an error cleaning up failed sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.320834 env[1377]: time="2025-10-31T01:32:48.320803963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-44rl6,Uid:03dcc52f-4acf-4546-b99b-cf4de4d54704,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.324708 env[1377]: time="2025-10-31T01:32:48.324577317Z" level=error msg="Failed to destroy network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.329426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0-shm.mount: Deactivated successfully.
Oct 31 01:32:48.330953 kubelet[2284]: E1031 01:32:48.327013 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.330953 kubelet[2284]: E1031 01:32:48.330864 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.331391 env[1377]: time="2025-10-31T01:32:48.330660364Z" level=error msg="encountered an error cleaning up failed sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.331391 env[1377]: time="2025-10-31T01:32:48.330718889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27j8s,Uid:4bc01c1a-5001-4f6b-ad3f-615201398d71,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.333635 kubelet[2284]: E1031 01:32:48.333565 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6"
Oct 31 01:32:48.333812 env[1377]: time="2025-10-31T01:32:48.333784575Z" level=error msg="Failed to destroy network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.335644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681-shm.mount: Deactivated successfully.
Oct 31 01:32:48.336615 kubelet[2284]: E1031 01:32:48.336472 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-27j8s"
Oct 31 01:32:48.336967 env[1377]: time="2025-10-31T01:32:48.336933203Z" level=error msg="encountered an error cleaning up failed sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.337530 env[1377]: time="2025-10-31T01:32:48.337500473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc76fd6ff-vkhmk,Uid:d48c0130-5c14-4eac-a5d1-65ae7e9f18bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.338351 kubelet[2284]: E1031 01:32:48.338159 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-27j8s"
Oct 31 01:32:48.338351 kubelet[2284]: E1031 01:32:48.338168 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6"
Oct 31 01:32:48.338351 kubelet[2284]: E1031 01:32:48.338217 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f47b46b-44rl6_calico-apiserver(03dcc52f-4acf-4546-b99b-cf4de4d54704)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f47b46b-44rl6_calico-apiserver(03dcc52f-4acf-4546-b99b-cf4de4d54704)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704"
Oct 31 01:32:48.338462 kubelet[2284]: E1031 01:32:48.338246 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-27j8s_kube-system(4bc01c1a-5001-4f6b-ad3f-615201398d71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-27j8s_kube-system(4bc01c1a-5001-4f6b-ad3f-615201398d71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-27j8s" podUID="4bc01c1a-5001-4f6b-ad3f-615201398d71"
Oct 31 01:32:48.338462 kubelet[2284]: E1031 01:32:48.338252 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.338462 kubelet[2284]: E1031 01:32:48.338277 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc76fd6ff-vkhmk"
Oct 31 01:32:48.338601 kubelet[2284]: E1031 01:32:48.338287 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc76fd6ff-vkhmk"
Oct 31 01:32:48.338601 kubelet[2284]: E1031 01:32:48.338301 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc76fd6ff-vkhmk_calico-system(d48c0130-5c14-4eac-a5d1-65ae7e9f18bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc76fd6ff-vkhmk_calico-system(d48c0130-5c14-4eac-a5d1-65ae7e9f18bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc76fd6ff-vkhmk" podUID="d48c0130-5c14-4eac-a5d1-65ae7e9f18bd"
Oct 31 01:32:48.375958 env[1377]: time="2025-10-31T01:32:48.375922519Z" level=error msg="Failed to destroy network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.376363 env[1377]: time="2025-10-31T01:32:48.376341575Z" level=error msg="encountered an error cleaning up failed sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.376449 env[1377]: time="2025-10-31T01:32:48.376431830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7495b6f49d-9bz8s,Uid:3586a90a-6636-45ab-8082-c6aa9bdb62e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.377317 kubelet[2284]: E1031 01:32:48.376678 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.377317 kubelet[2284]: E1031 01:32:48.376716 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s"
Oct 31 01:32:48.377317 kubelet[2284]: E1031 01:32:48.376730 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s"
Oct 31 01:32:48.377465 kubelet[2284]: E1031 01:32:48.376761 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7495b6f49d-9bz8s_calico-apiserver(3586a90a-6636-45ab-8082-c6aa9bdb62e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7495b6f49d-9bz8s_calico-apiserver(3586a90a-6636-45ab-8082-c6aa9bdb62e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3"
Oct 31 01:32:48.383810 env[1377]: time="2025-10-31T01:32:48.383759500Z" level=error msg="Failed to destroy network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.384388 env[1377]: time="2025-10-31T01:32:48.384241860Z" level=error msg="encountered an error cleaning up failed sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.384388 env[1377]: time="2025-10-31T01:32:48.384330212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-s9kdt,Uid:8f69b598-caa2-4abe-911a-df60fbb3c4df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.384628 kubelet[2284]: E1031 01:32:48.384573 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.384700 kubelet[2284]: E1031 01:32:48.384656 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt"
Oct 31 01:32:48.384700 kubelet[2284]: E1031 01:32:48.384687 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt"
Oct 31 01:32:48.384767 kubelet[2284]: E1031 01:32:48.384740 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59f47b46b-s9kdt_calico-apiserver(8f69b598-caa2-4abe-911a-df60fbb3c4df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59f47b46b-s9kdt_calico-apiserver(8f69b598-caa2-4abe-911a-df60fbb3c4df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df"
Oct 31 01:32:48.393980 env[1377]: time="2025-10-31T01:32:48.393945896Z" level=error msg="Failed to destroy network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.394459 env[1377]: time="2025-10-31T01:32:48.394437022Z" level=error msg="encountered an error cleaning up failed sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.394575 env[1377]: time="2025-10-31T01:32:48.394551630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vc6st,Uid:68e0baab-eac1-409d-a79c-945bc83eb739,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.394860 kubelet[2284]: E1031 01:32:48.394821 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.394920 kubelet[2284]: E1031 01:32:48.394891 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vc6st"
Oct 31 01:32:48.394920 kubelet[2284]: E1031 01:32:48.394912 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vc6st"
Oct 31 01:32:48.394998 kubelet[2284]: E1031 01:32:48.394967 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739"
Oct 31 01:32:48.396723 env[1377]: time="2025-10-31T01:32:48.396695456Z" level=error msg="Failed to destroy network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.397493 env[1377]: time="2025-10-31T01:32:48.397466853Z" level=error msg="encountered an error cleaning up failed sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.397625 env[1377]: time="2025-10-31T01:32:48.397596850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j9jdm,Uid:c35d3eac-7307-43de-bf4e-73472193a4cb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:32:48.397929 kubelet[2284]: E1031 01:32:48.397813 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\": plugin type=\"calico\" failed
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.397929 kubelet[2284]: E1031 01:32:48.397848 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j9jdm" Oct 31 01:32:48.397929 kubelet[2284]: E1031 01:32:48.397862 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j9jdm" Oct 31 01:32:48.398034 kubelet[2284]: E1031 01:32:48.397901 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j9jdm_kube-system(c35d3eac-7307-43de-bf4e-73472193a4cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j9jdm_kube-system(c35d3eac-7307-43de-bf4e-73472193a4cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j9jdm" podUID="c35d3eac-7307-43de-bf4e-73472193a4cb" Oct 31 01:32:48.402495 env[1377]: time="2025-10-31T01:32:48.402451869Z" 
level=error msg="Failed to destroy network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.402920 env[1377]: time="2025-10-31T01:32:48.402893167Z" level=error msg="encountered an error cleaning up failed sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.408179 env[1377]: time="2025-10-31T01:32:48.403012152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf7465748-bpws2,Uid:c5ff4853-41a8-4d7e-a0bc-f8d8451a400b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.408265 kubelet[2284]: E1031 01:32:48.403183 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.408265 kubelet[2284]: E1031 01:32:48.403237 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" Oct 31 01:32:48.408265 kubelet[2284]: E1031 01:32:48.403257 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" Oct 31 01:32:48.412907 kubelet[2284]: E1031 01:32:48.403296 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cf7465748-bpws2_calico-system(c5ff4853-41a8-4d7e-a0bc-f8d8451a400b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cf7465748-bpws2_calico-system(c5ff4853-41a8-4d7e-a0bc-f8d8451a400b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:32:48.417135 env[1377]: time="2025-10-31T01:32:48.417099794Z" level=error msg="Failed to destroy network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.417457 env[1377]: time="2025-10-31T01:32:48.417438384Z" level=error msg="encountered an error cleaning up failed sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.417534 env[1377]: time="2025-10-31T01:32:48.417516370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8nrww,Uid:d10ebfbe-91f8-4576-8542-06b4d8a152be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.417790 kubelet[2284]: E1031 01:32:48.417759 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.417830 kubelet[2284]: E1031 01:32:48.417811 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8nrww" Oct 31 01:32:48.417863 
kubelet[2284]: E1031 01:32:48.417834 2284 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8nrww" Oct 31 01:32:48.417896 kubelet[2284]: E1031 01:32:48.417871 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8nrww_calico-system(d10ebfbe-91f8-4576-8542-06b4d8a152be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8nrww_calico-system(d10ebfbe-91f8-4576-8542-06b4d8a152be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:32:48.586593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8-shm.mount: Deactivated successfully. Oct 31 01:32:48.586709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759-shm.mount: Deactivated successfully. Oct 31 01:32:48.586767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3-shm.mount: Deactivated successfully. 
Oct 31 01:32:48.586820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57-shm.mount: Deactivated successfully. Oct 31 01:32:48.586872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494-shm.mount: Deactivated successfully. Oct 31 01:32:48.586926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8-shm.mount: Deactivated successfully. Oct 31 01:32:48.653871 kubelet[2284]: I1031 01:32:48.653848 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:32:48.655920 kubelet[2284]: I1031 01:32:48.655724 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:32:48.696096 env[1377]: time="2025-10-31T01:32:48.696066635Z" level=info msg="StopPodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\"" Oct 31 01:32:48.696390 env[1377]: time="2025-10-31T01:32:48.696332930Z" level=info msg="StopPodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\"" Oct 31 01:32:48.696769 kubelet[2284]: I1031 01:32:48.696750 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:32:48.698108 env[1377]: time="2025-10-31T01:32:48.697288289Z" level=info msg="StopPodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\"" Oct 31 01:32:48.704044 kubelet[2284]: I1031 01:32:48.704026 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:32:48.707988 env[1377]: 
time="2025-10-31T01:32:48.707963760Z" level=info msg="StopPodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\"" Oct 31 01:32:48.708700 kubelet[2284]: I1031 01:32:48.708664 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:32:48.709846 env[1377]: time="2025-10-31T01:32:48.709143249Z" level=info msg="StopPodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\"" Oct 31 01:32:48.710913 kubelet[2284]: I1031 01:32:48.710483 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:32:48.710980 env[1377]: time="2025-10-31T01:32:48.710955904Z" level=info msg="StopPodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\"" Oct 31 01:32:48.711324 kubelet[2284]: I1031 01:32:48.711091 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:32:48.711371 env[1377]: time="2025-10-31T01:32:48.711352945Z" level=info msg="StopPodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\"" Oct 31 01:32:48.712370 kubelet[2284]: I1031 01:32:48.711998 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:32:48.712485 env[1377]: time="2025-10-31T01:32:48.712464112Z" level=info msg="StopPodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\"" Oct 31 01:32:48.713003 kubelet[2284]: I1031 01:32:48.712787 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:32:48.713173 env[1377]: time="2025-10-31T01:32:48.713156332Z" 
level=info msg="StopPodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\"" Oct 31 01:32:48.753391 env[1377]: time="2025-10-31T01:32:48.753355729Z" level=error msg="StopPodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" failed" error="failed to destroy network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.761416 env[1377]: time="2025-10-31T01:32:48.761380517Z" level=error msg="StopPodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" failed" error="failed to destroy network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.764554 kubelet[2284]: E1031 01:32:48.764524 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:32:48.764816 kubelet[2284]: E1031 01:32:48.764672 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d"} Oct 31 01:32:48.764816 kubelet[2284]: E1031 01:32:48.764730 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"03dcc52f-4acf-4546-b99b-cf4de4d54704\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.764816 kubelet[2284]: E1031 01:32:48.764745 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03dcc52f-4acf-4546-b99b-cf4de4d54704\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:32:48.769091 kubelet[2284]: E1031 01:32:48.757863 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:32:48.769091 kubelet[2284]: E1031 01:32:48.769036 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57"} Oct 31 01:32:48.769091 kubelet[2284]: E1031 01:32:48.769056 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"8f69b598-caa2-4abe-911a-df60fbb3c4df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.769091 kubelet[2284]: E1031 01:32:48.769073 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f69b598-caa2-4abe-911a-df60fbb3c4df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:32:48.783202 env[1377]: time="2025-10-31T01:32:48.783167071Z" level=error msg="StopPodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" failed" error="failed to destroy network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.783548 kubelet[2284]: E1031 01:32:48.783436 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:32:48.783548 kubelet[2284]: E1031 01:32:48.783476 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494"} Oct 31 01:32:48.783548 kubelet[2284]: E1031 01:32:48.783500 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3586a90a-6636-45ab-8082-c6aa9bdb62e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.783548 kubelet[2284]: E1031 01:32:48.783512 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3586a90a-6636-45ab-8082-c6aa9bdb62e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:32:48.794345 env[1377]: time="2025-10-31T01:32:48.794306089Z" level=error msg="StopPodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" failed" error="failed to destroy network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 01:32:48.794504 env[1377]: time="2025-10-31T01:32:48.794356646Z" level=error msg="StopPodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" failed" error="failed to destroy network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.794782 kubelet[2284]: E1031 01:32:48.794663 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:32:48.794782 kubelet[2284]: E1031 01:32:48.794705 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3"} Oct 31 01:32:48.794782 kubelet[2284]: E1031 01:32:48.794734 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.794782 kubelet[2284]: E1031 01:32:48.794752 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:32:48.795067 kubelet[2284]: E1031 01:32:48.794995 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:32:48.795067 kubelet[2284]: E1031 01:32:48.795018 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0"} Oct 31 01:32:48.795067 kubelet[2284]: E1031 01:32:48.795035 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bc01c1a-5001-4f6b-ad3f-615201398d71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.795067 kubelet[2284]: E1031 01:32:48.795047 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"4bc01c1a-5001-4f6b-ad3f-615201398d71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-27j8s" podUID="4bc01c1a-5001-4f6b-ad3f-615201398d71" Oct 31 01:32:48.807308 env[1377]: time="2025-10-31T01:32:48.807271370Z" level=error msg="StopPodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" failed" error="failed to destroy network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.807610 kubelet[2284]: E1031 01:32:48.807511 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:32:48.807610 kubelet[2284]: E1031 01:32:48.807543 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759"} Oct 31 01:32:48.807610 kubelet[2284]: E1031 01:32:48.807565 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d10ebfbe-91f8-4576-8542-06b4d8a152be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.807610 kubelet[2284]: E1031 01:32:48.807585 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d10ebfbe-91f8-4576-8542-06b4d8a152be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:32:48.811926 env[1377]: time="2025-10-31T01:32:48.811888149Z" level=error msg="StopPodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" failed" error="failed to destroy network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.812025 env[1377]: time="2025-10-31T01:32:48.811976263Z" level=error msg="StopPodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" failed" error="failed to destroy network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.812815 kubelet[2284]: E1031 01:32:48.812184 2284 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:32:48.812815 kubelet[2284]: E1031 01:32:48.812222 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681"} Oct 31 01:32:48.812815 kubelet[2284]: E1031 01:32:48.812249 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.812815 kubelet[2284]: E1031 01:32:48.812264 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc76fd6ff-vkhmk" podUID="d48c0130-5c14-4eac-a5d1-65ae7e9f18bd" Oct 31 01:32:48.812964 kubelet[2284]: E1031 01:32:48.812294 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:32:48.812964 kubelet[2284]: E1031 01:32:48.812305 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8"} Oct 31 01:32:48.812964 kubelet[2284]: E1031 01:32:48.812316 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c35d3eac-7307-43de-bf4e-73472193a4cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.812964 kubelet[2284]: E1031 01:32:48.812328 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c35d3eac-7307-43de-bf4e-73472193a4cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j9jdm" podUID="c35d3eac-7307-43de-bf4e-73472193a4cb" Oct 31 01:32:48.816080 env[1377]: time="2025-10-31T01:32:48.816047490Z" level=error msg="StopPodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" failed" 
error="failed to destroy network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:32:48.816353 kubelet[2284]: E1031 01:32:48.816247 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:32:48.816353 kubelet[2284]: E1031 01:32:48.816290 2284 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8"} Oct 31 01:32:48.816353 kubelet[2284]: E1031 01:32:48.816311 2284 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68e0baab-eac1-409d-a79c-945bc83eb739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:32:48.816353 kubelet[2284]: E1031 01:32:48.816328 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68e0baab-eac1-409d-a79c-945bc83eb739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:32:54.215321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860095458.mount: Deactivated successfully. Oct 31 01:32:54.389254 env[1377]: time="2025-10-31T01:32:54.389211459Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:54.402202 env[1377]: time="2025-10-31T01:32:54.402172912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:54.402861 env[1377]: time="2025-10-31T01:32:54.402845556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:54.403668 env[1377]: time="2025-10-31T01:32:54.403652160Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:32:54.404089 env[1377]: time="2025-10-31T01:32:54.404074003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 01:32:54.443940 env[1377]: time="2025-10-31T01:32:54.443911493Z" level=info msg="CreateContainer within sandbox \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 01:32:54.469146 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1768508948.mount: Deactivated successfully. Oct 31 01:32:54.471770 env[1377]: time="2025-10-31T01:32:54.471745036Z" level=info msg="CreateContainer within sandbox \"c530cd53bb9f598cbecda6dcc9b122a6a60c5e174b594ef02a40d55acd24cbf1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c3f7ff64beedeb944cd56391b76480b6781f2d051e3cc7f588f6a4ce9585d21\"" Oct 31 01:32:54.474217 env[1377]: time="2025-10-31T01:32:54.474192201Z" level=info msg="StartContainer for \"7c3f7ff64beedeb944cd56391b76480b6781f2d051e3cc7f588f6a4ce9585d21\"" Oct 31 01:32:54.520946 env[1377]: time="2025-10-31T01:32:54.520923948Z" level=info msg="StartContainer for \"7c3f7ff64beedeb944cd56391b76480b6781f2d051e3cc7f588f6a4ce9585d21\" returns successfully" Oct 31 01:32:55.554393 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 01:32:55.554819 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 01:32:55.727796 kubelet[2284]: I1031 01:32:55.727774 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:32:55.762094 systemd[1]: run-containerd-runc-k8s.io-7c3f7ff64beedeb944cd56391b76480b6781f2d051e3cc7f588f6a4ce9585d21-runc.sCRCdV.mount: Deactivated successfully. 
Oct 31 01:32:55.845715 kubelet[2284]: I1031 01:32:55.842753 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fsvc4" podStartSLOduration=2.657457117 podStartE2EDuration="24.84126919s" podCreationTimestamp="2025-10-31 01:32:31 +0000 UTC" firstStartedPulling="2025-10-31 01:32:32.220980278 +0000 UTC m=+16.832794356" lastFinishedPulling="2025-10-31 01:32:54.404792352 +0000 UTC m=+39.016606429" observedRunningTime="2025-10-31 01:32:54.791140016 +0000 UTC m=+39.402954105" watchObservedRunningTime="2025-10-31 01:32:55.84126919 +0000 UTC m=+40.453083272" Oct 31 01:32:56.314335 env[1377]: time="2025-10-31T01:32:56.314273555Z" level=info msg="StopPodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\"" Oct 31 01:32:56.324505 kernel: kauditd_printk_skb: 31 callbacks suppressed Oct 31 01:32:56.326004 kernel: audit: type=1325 audit(1761874376.320:292): table=filter:101 family=2 entries=21 op=nft_register_rule pid=3547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:56.320000 audit[3547]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:56.336790 kernel: audit: type=1300 audit(1761874376.320:292): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf40c9440 a2=0 a3=7ffcf40c942c items=0 ppid=2387 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:56.336839 kernel: audit: type=1327 audit(1761874376.320:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:56.320000 audit[3547]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf40c9440 a2=0 a3=7ffcf40c942c items=0 ppid=2387 pid=3547 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:56.349514 kernel: audit: type=1325 audit(1761874376.335:293): table=nat:102 family=2 entries=19 op=nft_register_chain pid=3547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:56.349565 kernel: audit: type=1300 audit(1761874376.335:293): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcf40c9440 a2=0 a3=7ffcf40c942c items=0 ppid=2387 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:56.349583 kernel: audit: type=1327 audit(1761874376.335:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:56.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:56.335000 audit[3547]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3547 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:56.335000 audit[3547]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcf40c9440 a2=0 a3=7ffcf40c942c items=0 ppid=2387 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:56.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.432 [INFO][3553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 
31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.435 [INFO][3553] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" iface="eth0" netns="/var/run/netns/cni-a22317ef-3526-75ff-bbe6-b006193f08b5" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.435 [INFO][3553] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" iface="eth0" netns="/var/run/netns/cni-a22317ef-3526-75ff-bbe6-b006193f08b5" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.440 [INFO][3553] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" iface="eth0" netns="/var/run/netns/cni-a22317ef-3526-75ff-bbe6-b006193f08b5" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.440 [INFO][3553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.440 [INFO][3553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.935 [INFO][3571] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.941 [INFO][3571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.941 [INFO][3571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.966 [WARNING][3571] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.966 [INFO][3571] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.967 [INFO][3571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:32:56.971485 env[1377]: 2025-10-31 01:32:56.969 [INFO][3553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:32:56.989584 env[1377]: time="2025-10-31T01:32:56.974017919Z" level=info msg="TearDown network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" successfully" Oct 31 01:32:56.989584 env[1377]: time="2025-10-31T01:32:56.974044781Z" level=info msg="StopPodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" returns successfully" Oct 31 01:32:56.973926 systemd[1]: run-netns-cni\x2da22317ef\x2d3526\x2d75ff\x2dbbe6\x2db006193f08b5.mount: Deactivated successfully. 
Oct 31 01:32:57.084164 kubelet[2284]: I1031 01:32:57.084120 2284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-ca-bundle\") pod \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\" (UID: \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\") " Oct 31 01:32:57.084482 kubelet[2284]: I1031 01:32:57.084182 2284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-std5w\" (UniqueName: \"kubernetes.io/projected/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-kube-api-access-std5w\") pod \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\" (UID: \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\") " Oct 31 01:32:57.084482 kubelet[2284]: I1031 01:32:57.084204 2284 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-backend-key-pair\") pod \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\" (UID: \"d48c0130-5c14-4eac-a5d1-65ae7e9f18bd\") " Oct 31 01:32:57.096628 kubelet[2284]: I1031 01:32:57.093581 2284 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d48c0130-5c14-4eac-a5d1-65ae7e9f18bd" (UID: "d48c0130-5c14-4eac-a5d1-65ae7e9f18bd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 01:32:57.100306 systemd[1]: var-lib-kubelet-pods-d48c0130\x2d5c14\x2d4eac\x2da5d1\x2d65ae7e9f18bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dstd5w.mount: Deactivated successfully. Oct 31 01:32:57.102255 systemd[1]: var-lib-kubelet-pods-d48c0130\x2d5c14\x2d4eac\x2da5d1\x2d65ae7e9f18bd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 01:32:57.102973 kubelet[2284]: I1031 01:32:57.102947 2284 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d48c0130-5c14-4eac-a5d1-65ae7e9f18bd" (UID: "d48c0130-5c14-4eac-a5d1-65ae7e9f18bd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 01:32:57.103105 kubelet[2284]: I1031 01:32:57.103093 2284 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-kube-api-access-std5w" (OuterVolumeSpecName: "kube-api-access-std5w") pod "d48c0130-5c14-4eac-a5d1-65ae7e9f18bd" (UID: "d48c0130-5c14-4eac-a5d1-65ae7e9f18bd"). InnerVolumeSpecName "kube-api-access-std5w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 01:32:57.184767 kubelet[2284]: I1031 01:32:57.184740 2284 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 01:32:57.184921 kubelet[2284]: I1031 01:32:57.184908 2284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-std5w\" (UniqueName: \"kubernetes.io/projected/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-kube-api-access-std5w\") on node \"localhost\" DevicePath \"\"" Oct 31 01:32:57.184992 kubelet[2284]: I1031 01:32:57.184981 2284 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 01:32:57.454000 audit[3633]: AVC avc: denied { write } for pid=3633 comm="tee" name="fd" dev="proc" ino=37902 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 
01:32:57.464466 kernel: audit: type=1400 audit(1761874377.454:294): avc: denied { write } for pid=3633 comm="tee" name="fd" dev="proc" ino=37902 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.464941 kernel: audit: type=1300 audit(1761874377.454:294): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd8e7c7ed a2=241 a3=1b6 items=1 ppid=3594 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.464967 kernel: audit: type=1307 audit(1761874377.454:294): cwd="/etc/service/enabled/felix/log" Oct 31 01:32:57.454000 audit[3633]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd8e7c7ed a2=241 a3=1b6 items=1 ppid=3594 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.454000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 31 01:32:57.454000 audit: PATH item=0 name="/dev/fd/63" inode=36859 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.471621 kernel: audit: type=1302 audit(1761874377.454:294): item=0 name="/dev/fd/63" inode=36859 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.454000 audit[3636]: AVC avc: denied { write } for pid=3636 comm="tee" name="fd" dev="proc" ino=37906 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.454000 audit[3636]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc8c987de a2=241 a3=1b6 items=1 ppid=3592 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.454000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 31 01:32:57.454000 audit: PATH item=0 name="/dev/fd/63" inode=37070 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.454000 audit[3640]: AVC avc: denied { write } for pid=3640 comm="tee" name="fd" dev="proc" ino=37910 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.454000 audit[3640]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc5f7437ed a2=241 a3=1b6 items=1 ppid=3599 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.454000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 31 01:32:57.454000 audit: PATH item=0 name="/dev/fd/63" inode=37891 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.458000 audit[3638]: AVC avc: denied { 
write } for pid=3638 comm="tee" name="fd" dev="proc" ino=37914 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.458000 audit[3638]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe090937ee a2=241 a3=1b6 items=1 ppid=3596 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.458000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 31 01:32:57.458000 audit: PATH item=0 name="/dev/fd/63" inode=36862 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.470000 audit[3645]: AVC avc: denied { write } for pid=3645 comm="tee" name="fd" dev="proc" ino=37923 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.470000 audit[3645]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff524d07dd a2=241 a3=1b6 items=1 ppid=3588 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.470000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 31 01:32:57.470000 audit: PATH item=0 name="/dev/fd/63" inode=37071 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.470000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.473000 audit[3655]: AVC avc: denied { write } for pid=3655 comm="tee" name="fd" dev="proc" ino=37927 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.473000 audit[3655]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc277c97ed a2=241 a3=1b6 items=1 ppid=3597 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.473000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 31 01:32:57.473000 audit: PATH item=0 name="/dev/fd/63" inode=37073 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.508000 audit[3659]: AVC avc: denied { write } for pid=3659 comm="tee" name="fd" dev="proc" ino=37939 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:32:57.508000 audit[3659]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0bd7b7ef a2=241 a3=1b6 items=1 ppid=3614 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.508000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 31 01:32:57.508000 audit: PATH item=0 name="/dev/fd/63" inode=37935 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:32:57.508000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit[3697]: AVC avc: 
denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.775000 audit: BPF prog-id=10 op=LOAD Oct 31 01:32:57.775000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc1c9b040 a2=98 a3=1fffffffffffffff items=0 ppid=3595 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.775000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:32:57.775000 audit: BPF prog-id=10 op=UNLOAD Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit: BPF prog-id=11 op=LOAD Oct 31 01:32:57.776000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc1c9af20 a2=94 a3=3 items=0 ppid=3595 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.776000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:32:57.776000 audit: BPF prog-id=11 op=UNLOAD Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { bpf } for pid=3697 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit: BPF prog-id=12 op=LOAD Oct 31 01:32:57.776000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc1c9af60 a2=94 a3=7ffdc1c9b140 items=0 ppid=3595 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.776000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:32:57.776000 audit: BPF prog-id=12 op=UNLOAD Oct 31 01:32:57.776000 audit[3697]: AVC avc: denied { perfmon } for pid=3697 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.776000 audit[3697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffdc1c9b030 a2=50 a3=a000000085 items=0 ppid=3595 pid=3697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.776000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.779000 audit: BPF prog-id=13 op=LOAD Oct 31 01:32:57.779000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9e896a90 a2=98 a3=3 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.780000 audit: BPF prog-id=13 op=UNLOAD Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.780000 audit: BPF prog-id=14 op=LOAD Oct 31 01:32:57.780000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9e896880 a2=94 a3=54428f items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.780000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.781000 audit: BPF prog-id=14 op=UNLOAD Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.781000 audit: BPF prog-id=15 op=LOAD Oct 31 01:32:57.781000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9e8968b0 a2=94 a3=2 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.781000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.781000 audit: BPF prog-id=15 op=UNLOAD Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit: BPF prog-id=16 op=LOAD Oct 31 01:32:57.891000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9e896770 a2=94 a3=1 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.891000 audit: BPF prog-id=16 op=UNLOAD Oct 31 01:32:57.891000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.891000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff9e896840 a2=50 a3=7fff9e896920 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.891000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.898000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.898000 audit[3698]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7fff9e896780 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.898000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.899000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.899000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff9e8967b0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.899000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.899000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff9e8966c0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.899000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.899000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff9e8967d0 a2=28 a3=0 items=0 
ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.899000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.899000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff9e8967b0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.899000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.899000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff9e8967a0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.899000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.899000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff9e8967d0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.900000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.900000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff9e8967b0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.900000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.900000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff9e8967d0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.900000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.900000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff9e8967a0 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.900000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.900000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff9e896810 a2=28 a3=0 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff9e8965c0 a2=50 a3=1 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit: BPF prog-id=17 op=LOAD Oct 31 01:32:57.901000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9e8965c0 a2=94 a3=5 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.901000 audit: BPF prog-id=17 op=UNLOAD Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } 
for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff9e896670 a2=50 a3=1 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff9e896790 a2=4 a3=38 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.901000 audit[3698]: AVC avc: denied { confidentiality } for pid=3698 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:32:57.901000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff9e8967e0 a2=94 a3=6 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.901000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { confidentiality } for pid=3698 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:32:57.902000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff9e895f90 a2=94 a3=88 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { bpf } for pid=3698 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC 
avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: AVC avc: denied { perfmon } for pid=3698 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.902000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff9e895f90 a2=94 a3=88 items=0 ppid=3595 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:32:57.910761 kubelet[2284]: I1031 01:32:57.910738 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/034a3d4f-f436-4259-8570-9d57a6e1d274-whisker-ca-bundle\") pod \"whisker-84f98497cf-b6pmv\" (UID: \"034a3d4f-f436-4259-8570-9d57a6e1d274\") " pod="calico-system/whisker-84f98497cf-b6pmv" Oct 31 01:32:57.910881 kubelet[2284]: I1031 01:32:57.910770 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/034a3d4f-f436-4259-8570-9d57a6e1d274-whisker-backend-key-pair\") pod \"whisker-84f98497cf-b6pmv\" (UID: \"034a3d4f-f436-4259-8570-9d57a6e1d274\") " pod="calico-system/whisker-84f98497cf-b6pmv" Oct 31 01:32:57.910881 kubelet[2284]: I1031 01:32:57.910796 2284 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvt5b\" (UniqueName: \"kubernetes.io/projected/034a3d4f-f436-4259-8570-9d57a6e1d274-kube-api-access-jvt5b\") pod \"whisker-84f98497cf-b6pmv\" (UID: \"034a3d4f-f436-4259-8570-9d57a6e1d274\") " pod="calico-system/whisker-84f98497cf-b6pmv" Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit: BPF prog-id=18 op=LOAD Oct 31 01:32:57.911000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeca340bd0 a2=98 a3=1999999999999999 items=0 ppid=3595 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.911000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:32:57.911000 audit: BPF prog-id=18 op=UNLOAD Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit: BPF prog-id=19 op=LOAD Oct 31 01:32:57.911000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeca340ab0 a2=94 a3=ffff items=0 ppid=3595 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.911000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:32:57.911000 audit: BPF prog-id=19 op=UNLOAD Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } 
for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.911000 audit: BPF prog-id=20 op=LOAD Oct 31 01:32:57.911000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeca340af0 a2=94 a3=7ffeca340cd0 items=0 ppid=3595 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.911000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:32:57.911000 audit: BPF prog-id=20 op=UNLOAD Oct 31 01:32:57.966424 systemd-networkd[1125]: vxlan.calico: Link UP Oct 31 01:32:57.966429 systemd-networkd[1125]: vxlan.calico: Gained carrier Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.985000 audit: BPF prog-id=21 op=LOAD Oct 31 01:32:57.985000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2cd53d10 a2=98 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.985000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.985000 audit: BPF prog-id=21 op=UNLOAD Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit: BPF prog-id=22 op=LOAD Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2cd53b20 a2=94 a3=54428f items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit: BPF prog-id=22 op=UNLOAD Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit: BPF prog-id=23 op=LOAD Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2cd53b50 a2=94 a3=2 
items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit: BPF prog-id=23 op=UNLOAD Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe2cd53a20 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cd53a50 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cd53960 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe2cd53a70 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe2cd53a50 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe2cd53a40 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe2cd53a70 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cd53a50 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cd53a70 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe2cd53a40 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe2cd53ab0 a2=28 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for 
pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.986000 audit: BPF prog-id=24 op=LOAD Oct 31 01:32:57.986000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe2cd53920 a2=94 a3=0 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.986000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.986000 audit: BPF prog-id=24 op=UNLOAD Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe2cd53910 a2=50 a3=2800 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe2cd53910 a2=50 a3=2800 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit: BPF prog-id=25 op=LOAD Oct 31 01:32:57.987000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe2cd53130 a2=94 a3=2 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.987000 audit: BPF prog-id=25 op=UNLOAD Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { perfmon } for pid=3730 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit[3730]: AVC avc: denied { bpf } for pid=3730 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.987000 audit: BPF prog-id=26 op=LOAD Oct 31 01:32:57.987000 audit[3730]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe2cd53230 a2=94 a3=30 items=0 ppid=3595 pid=3730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
31 01:32:57.990000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.990000 audit: BPF prog-id=27 op=LOAD Oct 31 01:32:57.990000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe9a1f14e0 a2=98 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.990000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:57.990000 audit: BPF prog-id=27 op=UNLOAD Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC 
avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit: BPF prog-id=28 op=LOAD Oct 31 01:32:57.991000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9a1f12d0 a2=94 a3=54428f items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:57.991000 audit: BPF prog-id=28 op=UNLOAD Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:57.991000 audit: BPF prog-id=29 op=LOAD Oct 31 01:32:57.991000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9a1f1300 a2=94 a3=2 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:57.991000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:57.991000 audit: BPF prog-id=29 op=UNLOAD Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.100000 audit: BPF prog-id=30 op=LOAD Oct 31 01:32:58.100000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9a1f11c0 a2=94 a3=1 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.100000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.101000 audit: BPF prog-id=30 op=UNLOAD Oct 31 01:32:58.101000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.101000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe9a1f1290 a2=50 a3=7ffe9a1f1370 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.101000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.108000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.108000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9a1f11d0 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.108000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9a1f1200 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9a1f1110 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9a1f1220 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9a1f1200 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9a1f11f0 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9a1f1220 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9a1f1200 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9a1f1220 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe9a1f11f0 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe9a1f1260 a2=28 a3=0 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe9a1f1010 a2=50 a3=1 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit: BPF prog-id=31 op=LOAD Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9a1f1010 a2=94 a3=5 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit: BPF prog-id=31 op=UNLOAD Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe9a1f10c0 a2=50 a3=1 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe9a1f11e0 a2=4 a3=38 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: 
denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { confidentiality } for pid=3732 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe9a1f1230 a2=94 a3=6 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { confidentiality } for pid=3732 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe9a1f09e0 a2=94 a3=88 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { perfmon } for pid=3732 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { confidentiality } for pid=3732 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe9a1f09e0 a2=94 a3=88 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.109000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.109000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe9a1f2410 a2=10 a3=f8f00800 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.109000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.110000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.110000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe9a1f22b0 a2=10 a3=3 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.110000 audit[3732]: AVC avc: denied { bpf } for pid=3732 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.110000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe9a1f2250 a2=10 a3=3 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.110000 audit[3732]: AVC avc: denied { bpf } for pid=3732 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:32:58.110000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe9a1f2250 a2=10 a3=7 items=0 ppid=3595 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.110000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:32:58.117000 audit: BPF prog-id=26 op=UNLOAD Oct 31 01:32:58.126947 env[1377]: time="2025-10-31T01:32:58.126921532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f98497cf-b6pmv,Uid:034a3d4f-f436-4259-8570-9d57a6e1d274,Namespace:calico-system,Attempt:0,}" Oct 31 01:32:58.193000 audit[3780]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3780 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:32:58.193000 audit[3780]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff347ffb90 a2=0 a3=7fff347ffb7c items=0 ppid=3595 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.193000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:32:58.205000 audit[3777]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:32:58.205000 audit[3777]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 
a1=7fff9e48a000 a2=0 a3=7fff9e489fec items=0 ppid=3595 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.205000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:32:58.209000 audit[3782]: NETFILTER_CFG table=filter:105 family=2 entries=39 op=nft_register_chain pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:32:58.209000 audit[3782]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffc03d222f0 a2=0 a3=7ffc03d222dc items=0 ppid=3595 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.209000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:32:58.214000 audit[3776]: NETFILTER_CFG table=raw:106 family=2 entries=21 op=nft_register_chain pid=3776 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:32:58.214000 audit[3776]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcea2d00d0 a2=0 a3=7ffcea2d00bc items=0 ppid=3595 pid=3776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.214000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:32:58.269027 systemd-networkd[1125]: cali6f784cc9aa9: Link UP Oct 
31 01:32:58.270624 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6f784cc9aa9: link becomes ready Oct 31 01:32:58.270642 systemd-networkd[1125]: cali6f784cc9aa9: Gained carrier Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.194 [INFO][3758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84f98497cf--b6pmv-eth0 whisker-84f98497cf- calico-system 034a3d4f-f436-4259-8570-9d57a6e1d274 942 0 2025-10-31 01:32:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84f98497cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84f98497cf-b6pmv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6f784cc9aa9 [] [] }} ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.194 [INFO][3758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.233 [INFO][3792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" HandleID="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Workload="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.234 [INFO][3792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" 
HandleID="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Workload="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84f98497cf-b6pmv", "timestamp":"2025-10-31 01:32:58.233854715 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.234 [INFO][3792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.234 [INFO][3792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.234 [INFO][3792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.242 [INFO][3792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.249 [INFO][3792] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.253 [INFO][3792] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.254 [INFO][3792] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.255 [INFO][3792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.255 [INFO][3792] ipam/ipam.go 1219: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.255 [INFO][3792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.258 [INFO][3792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.262 [INFO][3792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.262 [INFO][3792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" host="localhost" Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.262 [INFO][3792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:32:58.281922 env[1377]: 2025-10-31 01:32:58.262 [INFO][3792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" HandleID="k8s-pod-network.86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Workload="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.284027 env[1377]: 2025-10-31 01:32:58.264 [INFO][3758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84f98497cf--b6pmv-eth0", GenerateName:"whisker-84f98497cf-", Namespace:"calico-system", SelfLink:"", UID:"034a3d4f-f436-4259-8570-9d57a6e1d274", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84f98497cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84f98497cf-b6pmv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6f784cc9aa9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:32:58.284027 env[1377]: 2025-10-31 01:32:58.264 [INFO][3758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.284027 env[1377]: 2025-10-31 01:32:58.264 [INFO][3758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f784cc9aa9 ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.284027 env[1377]: 2025-10-31 01:32:58.273 [INFO][3758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.284027 env[1377]: 2025-10-31 01:32:58.273 [INFO][3758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84f98497cf--b6pmv-eth0", GenerateName:"whisker-84f98497cf-", Namespace:"calico-system", SelfLink:"", UID:"034a3d4f-f436-4259-8570-9d57a6e1d274", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84f98497cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b", Pod:"whisker-84f98497cf-b6pmv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6f784cc9aa9", MAC:"f6:2a:0b:ec:40:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:32:58.284027 env[1377]: 2025-10-31 01:32:58.280 [INFO][3758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b" Namespace="calico-system" Pod="whisker-84f98497cf-b6pmv" WorkloadEndpoint="localhost-k8s-whisker--84f98497cf--b6pmv-eth0" Oct 31 01:32:58.288517 env[1377]: time="2025-10-31T01:32:58.288077050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:58.288517 env[1377]: time="2025-10-31T01:32:58.288128440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:58.288517 env[1377]: time="2025-10-31T01:32:58.288146225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:58.288517 env[1377]: time="2025-10-31T01:32:58.288282401Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b pid=3814 runtime=io.containerd.runc.v2 Oct 31 01:32:58.312629 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:32:58.304000 audit[3836]: NETFILTER_CFG table=filter:107 family=2 entries=59 op=nft_register_chain pid=3836 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:32:58.304000 audit[3836]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffe10728f50 a2=0 a3=7ffe10728f3c items=0 ppid=3595 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:58.304000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:32:58.336900 env[1377]: time="2025-10-31T01:32:58.336872088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f98497cf-b6pmv,Uid:034a3d4f-f436-4259-8570-9d57a6e1d274,Namespace:calico-system,Attempt:0,} returns sandbox id \"86b863eae276933bfb6f50cd9a8f259cd9273f1c3b03402995bd8b8eadba181b\"" Oct 31 01:32:58.345598 env[1377]: time="2025-10-31T01:32:58.345566288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:32:58.645885 env[1377]: time="2025-10-31T01:32:58.645848491Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:32:58.646386 env[1377]: time="2025-10-31T01:32:58.646342040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:32:58.649886 kubelet[2284]: E1031 01:32:58.648057 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:32:58.650191 kubelet[2284]: E1031 01:32:58.650176 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:32:58.664210 kubelet[2284]: E1031 01:32:58.657440 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fca255853824dd5924def6ba75879e0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:32:58.666334 env[1377]: time="2025-10-31T01:32:58.666317250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:32:58.988737 
env[1377]: time="2025-10-31T01:32:58.988641811Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:32:58.989430 env[1377]: time="2025-10-31T01:32:58.989293291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:32:58.989734 kubelet[2284]: E1031 01:32:58.989702 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:32:58.989825 kubelet[2284]: E1031 01:32:58.989814 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:32:58.989993 kubelet[2284]: E1031 01:32:58.989965 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:32:58.991699 kubelet[2284]: E1031 01:32:58.991651 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:32:59.252716 systemd-networkd[1125]: vxlan.calico: Gained IPv6LL Oct 31 01:32:59.444690 systemd-networkd[1125]: cali6f784cc9aa9: Gained IPv6LL Oct 31 01:32:59.570886 env[1377]: time="2025-10-31T01:32:59.570867447Z" level=info msg="StopPodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\"" Oct 31 01:32:59.571487 env[1377]: time="2025-10-31T01:32:59.571475093Z" level=info msg="StopPodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\"" Oct 31 01:32:59.585526 kubelet[2284]: I1031 01:32:59.585492 2284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48c0130-5c14-4eac-a5d1-65ae7e9f18bd" path="/var/lib/kubelet/pods/d48c0130-5c14-4eac-a5d1-65ae7e9f18bd/volumes" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.648 [INFO][3880] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.648 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" iface="eth0" netns="/var/run/netns/cni-20162817-a32d-bffa-c0fd-a0a1bce0cef5" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.648 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" iface="eth0" netns="/var/run/netns/cni-20162817-a32d-bffa-c0fd-a0a1bce0cef5" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.648 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" iface="eth0" netns="/var/run/netns/cni-20162817-a32d-bffa-c0fd-a0a1bce0cef5" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.648 [INFO][3880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.648 [INFO][3880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.668 [INFO][3897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.668 [INFO][3897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.668 [INFO][3897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.673 [WARNING][3897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.673 [INFO][3897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.675 [INFO][3897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:32:59.677508 env[1377]: 2025-10-31 01:32:59.676 [INFO][3880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:32:59.679562 systemd[1]: run-netns-cni\x2d20162817\x2da32d\x2dbffa\x2dc0fd\x2da0a1bce0cef5.mount: Deactivated successfully. 
Oct 31 01:32:59.679994 env[1377]: time="2025-10-31T01:32:59.679969277Z" level=info msg="TearDown network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" successfully" Oct 31 01:32:59.680048 env[1377]: time="2025-10-31T01:32:59.680036527Z" level=info msg="StopPodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" returns successfully" Oct 31 01:32:59.680810 env[1377]: time="2025-10-31T01:32:59.680791093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-s9kdt,Uid:8f69b598-caa2-4abe-911a-df60fbb3c4df,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.641 [INFO][3881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.641 [INFO][3881] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" iface="eth0" netns="/var/run/netns/cni-4d8f2ba8-4226-95d3-e99e-48659f1c8155" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.641 [INFO][3881] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" iface="eth0" netns="/var/run/netns/cni-4d8f2ba8-4226-95d3-e99e-48659f1c8155" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.644 [INFO][3881] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" iface="eth0" netns="/var/run/netns/cni-4d8f2ba8-4226-95d3-e99e-48659f1c8155" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.644 [INFO][3881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.644 [INFO][3881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.681 [INFO][3895] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.681 [INFO][3895] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.681 [INFO][3895] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.696 [WARNING][3895] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.696 [INFO][3895] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.698 [INFO][3895] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:32:59.703792 env[1377]: 2025-10-31 01:32:59.701 [INFO][3881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:32:59.706340 systemd[1]: run-netns-cni\x2d4d8f2ba8\x2d4226\x2d95d3\x2de99e\x2d48659f1c8155.mount: Deactivated successfully. 
Oct 31 01:32:59.707766 env[1377]: time="2025-10-31T01:32:59.707732489Z" level=info msg="TearDown network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" successfully" Oct 31 01:32:59.707934 env[1377]: time="2025-10-31T01:32:59.707911792Z" level=info msg="StopPodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" returns successfully" Oct 31 01:32:59.715752 env[1377]: time="2025-10-31T01:32:59.708568161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8nrww,Uid:d10ebfbe-91f8-4576-8542-06b4d8a152be,Namespace:calico-system,Attempt:1,}" Oct 31 01:32:59.753062 kubelet[2284]: E1031 01:32:59.753018 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:32:59.909000 audit[3947]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=3947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:59.909000 audit[3947]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf60f9d00 a2=0 a3=7ffcf60f9cec items=0 
ppid=2387 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:59.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:59.913000 audit[3947]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=3947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:32:59.913000 audit[3947]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf60f9d00 a2=0 a3=0 items=0 ppid=2387 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:32:59.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:32:59.923146 systemd-networkd[1125]: cali8fc7743da35: Link UP Oct 31 01:32:59.926479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:32:59.926557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8fc7743da35: link becomes ready Oct 31 01:32:59.926669 systemd-networkd[1125]: cali8fc7743da35: Gained carrier Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.792 [INFO][3909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0 calico-apiserver-59f47b46b- calico-apiserver 8f69b598-caa2-4abe-911a-df60fbb3c4df 959 0 2025-10-31 01:32:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f47b46b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} 
{k8s localhost calico-apiserver-59f47b46b-s9kdt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8fc7743da35 [] [] }} ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.792 [INFO][3909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.857 [INFO][3922] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" HandleID="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.858 [INFO][3922] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" HandleID="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f47b46b-s9kdt", "timestamp":"2025-10-31 01:32:59.8579022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.858 
[INFO][3922] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.858 [INFO][3922] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.858 [INFO][3922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.862 [INFO][3922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.882 [INFO][3922] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.898 [INFO][3922] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.900 [INFO][3922] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.902 [INFO][3922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.902 [INFO][3922] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.904 [INFO][3922] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1 Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.912 [INFO][3922] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.918 [INFO][3922] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.918 [INFO][3922] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" host="localhost" Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.918 [INFO][3922] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:32:59.943423 env[1377]: 2025-10-31 01:32:59.918 [INFO][3922] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" HandleID="k8s-pod-network.f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.943953 env[1377]: 2025-10-31 01:32:59.919 [INFO][3909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f69b598-caa2-4abe-911a-df60fbb3c4df", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f47b46b-s9kdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fc7743da35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:32:59.943953 env[1377]: 2025-10-31 01:32:59.920 [INFO][3909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.943953 env[1377]: 2025-10-31 01:32:59.920 [INFO][3909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fc7743da35 ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.943953 env[1377]: 2025-10-31 01:32:59.927 [INFO][3909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.943953 env[1377]: 2025-10-31 01:32:59.927 [INFO][3909] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f69b598-caa2-4abe-911a-df60fbb3c4df", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1", Pod:"calico-apiserver-59f47b46b-s9kdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fc7743da35", MAC:"a6:ec:2a:47:5e:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:32:59.943953 env[1377]: 2025-10-31 01:32:59.941 [INFO][3909] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-s9kdt" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:32:59.960060 env[1377]: time="2025-10-31T01:32:59.960014824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:32:59.960243 env[1377]: time="2025-10-31T01:32:59.960227004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:32:59.960348 env[1377]: time="2025-10-31T01:32:59.960307305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:32:59.960562 env[1377]: time="2025-10-31T01:32:59.960523661Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1 pid=3962 runtime=io.containerd.runc.v2 Oct 31 01:32:59.999923 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:00.044335 systemd-networkd[1125]: calia9c5889affe: Link UP Oct 31 01:33:00.044489 systemd-networkd[1125]: calia9c5889affe: Gained carrier Oct 31 01:33:00.044693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia9c5889affe: link becomes ready Oct 31 01:33:00.050232 env[1377]: time="2025-10-31T01:33:00.050201742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-s9kdt,Uid:8f69b598-caa2-4abe-911a-df60fbb3c4df,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1\"" Oct 31 01:33:00.052001 env[1377]: time="2025-10-31T01:33:00.051229640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 
31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.860 [INFO][3927] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--8nrww-eth0 goldmane-666569f655- calico-system d10ebfbe-91f8-4576-8542-06b4d8a152be 958 0 2025-10-31 01:32:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-8nrww eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia9c5889affe [] [] }} ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.860 [INFO][3927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.929 [INFO][3941] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" HandleID="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.929 [INFO][3941] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" HandleID="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002c8fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-8nrww", "timestamp":"2025-10-31 01:32:59.929063587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.929 [INFO][3941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.929 [INFO][3941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.929 [INFO][3941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.967 [INFO][3941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:32:59.979 [INFO][3941] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.005 [INFO][3941] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.006 [INFO][3941] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.008 [INFO][3941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.008 [INFO][3941] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 
01:33:00.008 [INFO][3941] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9 Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.011 [INFO][3941] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.026 [INFO][3941] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.026 [INFO][3941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" host="localhost" Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.026 [INFO][3941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:33:00.064340 env[1377]: 2025-10-31 01:33:00.026 [INFO][3941] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" HandleID="k8s-pod-network.e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.064853 env[1377]: 2025-10-31 01:33:00.028 [INFO][3927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8nrww-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d10ebfbe-91f8-4576-8542-06b4d8a152be", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-8nrww", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9c5889affe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:00.064853 env[1377]: 2025-10-31 01:33:00.028 [INFO][3927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.064853 env[1377]: 2025-10-31 01:33:00.028 [INFO][3927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9c5889affe ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.064853 env[1377]: 2025-10-31 01:33:00.044 [INFO][3927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.064853 env[1377]: 2025-10-31 01:33:00.045 [INFO][3927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8nrww-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d10ebfbe-91f8-4576-8542-06b4d8a152be", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9", Pod:"goldmane-666569f655-8nrww", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9c5889affe", MAC:"3a:1e:15:0d:3c:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:00.064853 env[1377]: 2025-10-31 01:33:00.062 [INFO][3927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9" Namespace="calico-system" Pod="goldmane-666569f655-8nrww" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:00.093139 env[1377]: time="2025-10-31T01:33:00.093002562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:00.093139 env[1377]: time="2025-10-31T01:33:00.093037985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:00.093139 env[1377]: time="2025-10-31T01:33:00.093046098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:00.093331 env[1377]: time="2025-10-31T01:33:00.093304830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9 pid=4012 runtime=io.containerd.runc.v2 Oct 31 01:33:00.117000 audit[4034]: NETFILTER_CFG table=filter:110 family=2 entries=50 op=nft_register_chain pid=4034 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:00.117000 audit[4034]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffeb099b140 a2=0 a3=7ffeb099b12c items=0 ppid=3595 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:00.117000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:00.127531 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:00.134000 audit[4045]: NETFILTER_CFG table=filter:111 family=2 entries=48 op=nft_register_chain pid=4045 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:00.134000 audit[4045]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffcf53ccc70 a2=0 a3=7ffcf53ccc5c items=0 ppid=3595 pid=4045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:00.134000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:00.152898 env[1377]: 
time="2025-10-31T01:33:00.152868461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8nrww,Uid:d10ebfbe-91f8-4576-8542-06b4d8a152be,Namespace:calico-system,Attempt:1,} returns sandbox id \"e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9\"" Oct 31 01:33:00.570802 env[1377]: time="2025-10-31T01:33:00.570295360Z" level=info msg="StopPodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\"" Oct 31 01:33:00.591414 env[1377]: time="2025-10-31T01:33:00.591377949Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:00.598783 env[1377]: time="2025-10-31T01:33:00.598744675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:00.599326 kubelet[2284]: E1031 01:33:00.598924 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:00.599326 kubelet[2284]: E1031 01:33:00.598962 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:00.599326 kubelet[2284]: E1031 01:33:00.599134 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b64v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-s9kdt_calico-apiserver(8f69b598-caa2-4abe-911a-df60fbb3c4df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:00.599600 env[1377]: time="2025-10-31T01:33:00.599587350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:33:00.600510 kubelet[2284]: E1031 01:33:00.600482 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:00.754153 kubelet[2284]: E1031 01:33:00.754031 2284 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:00.794000 audit[4077]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:00.794000 audit[4077]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffea314840 a2=0 a3=7fffea31482c items=0 ppid=2387 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:00.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:00.798000 audit[4077]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:00.798000 audit[4077]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffea314840 a2=0 a3=0 items=0 ppid=2387 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:00.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.751 
[INFO][4062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.751 [INFO][4062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" iface="eth0" netns="/var/run/netns/cni-269aee25-2c33-6838-4e46-9b9546bf71f1" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.751 [INFO][4062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" iface="eth0" netns="/var/run/netns/cni-269aee25-2c33-6838-4e46-9b9546bf71f1" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.751 [INFO][4062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" iface="eth0" netns="/var/run/netns/cni-269aee25-2c33-6838-4e46-9b9546bf71f1" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.751 [INFO][4062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.751 [INFO][4062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.776 [INFO][4070] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.776 [INFO][4070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.777 [INFO][4070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.797 [WARNING][4070] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.797 [INFO][4070] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.798 [INFO][4070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:00.801090 env[1377]: 2025-10-31 01:33:00.799 [INFO][4062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:00.803064 systemd[1]: run-netns-cni\x2d269aee25\x2d2c33\x2d6838\x2d4e46\x2d9b9546bf71f1.mount: Deactivated successfully. 
Oct 31 01:33:00.803354 env[1377]: time="2025-10-31T01:33:00.803324557Z" level=info msg="TearDown network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" successfully" Oct 31 01:33:00.803412 env[1377]: time="2025-10-31T01:33:00.803400567Z" level=info msg="StopPodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" returns successfully" Oct 31 01:33:00.803995 env[1377]: time="2025-10-31T01:33:00.803973155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j9jdm,Uid:c35d3eac-7307-43de-bf4e-73472193a4cb,Namespace:kube-system,Attempt:1,}" Oct 31 01:33:00.961798 env[1377]: time="2025-10-31T01:33:00.961083512Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:00.968186 env[1377]: time="2025-10-31T01:33:00.968134905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:33:00.968649 kubelet[2284]: E1031 01:33:00.968464 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:33:00.968649 kubelet[2284]: E1031 01:33:00.968508 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 
01:33:00.968649 kubelet[2284]: E1031 01:33:00.968618 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kdtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8nrww_calico-system(d10ebfbe-91f8-4576-8542-06b4d8a152be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:00.969743 kubelet[2284]: E1031 01:33:00.969714 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:33:00.983741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali35c6f7c7cd9: link becomes ready Oct 31 01:33:00.981447 systemd-networkd[1125]: cali35c6f7c7cd9: Link UP Oct 31 01:33:00.981548 systemd-networkd[1125]: cali35c6f7c7cd9: Gained carrier Oct 31 01:33:00.999231 
env[1377]: 2025-10-31 01:33:00.900 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0 coredns-668d6bf9bc- kube-system c35d3eac-7307-43de-bf4e-73472193a4cb 980 0 2025-10-31 01:32:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-j9jdm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35c6f7c7cd9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.900 [INFO][4078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.917 [INFO][4091] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" HandleID="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.917 [INFO][4091] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" HandleID="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251660), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-j9jdm", "timestamp":"2025-10-31 01:33:00.917434401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.917 [INFO][4091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.917 [INFO][4091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.917 [INFO][4091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.927 [INFO][4091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.949 [INFO][4091] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.953 [INFO][4091] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.954 [INFO][4091] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.956 [INFO][4091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.956 [INFO][4091] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.957 [INFO][4091] ipam/ipam.go 1780: 
Creating new handle: k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.960 [INFO][4091] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.975 [INFO][4091] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.975 [INFO][4091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" host="localhost" Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.975 [INFO][4091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:00.999231 env[1377]: 2025-10-31 01:33:00.975 [INFO][4091] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" HandleID="k8s-pod-network.1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.999708 env[1377]: 2025-10-31 01:33:00.977 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", 
UID:"c35d3eac-7307-43de-bf4e-73472193a4cb", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-j9jdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35c6f7c7cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:00.999708 env[1377]: 2025-10-31 01:33:00.977 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.999708 env[1377]: 2025-10-31 01:33:00.977 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35c6f7c7cd9 
ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.999708 env[1377]: 2025-10-31 01:33:00.978 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:00.999708 env[1377]: 2025-10-31 01:33:00.978 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c35d3eac-7307-43de-bf4e-73472193a4cb", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e", Pod:"coredns-668d6bf9bc-j9jdm", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35c6f7c7cd9", MAC:"46:98:ea:97:39:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:00.999708 env[1377]: 2025-10-31 01:33:00.995 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-j9jdm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:01.049000 audit[4106]: NETFILTER_CFG table=filter:114 family=2 entries=50 op=nft_register_chain pid=4106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:01.049000 audit[4106]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffc543796d0 a2=0 a3=7ffc543796bc items=0 ppid=3595 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.049000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:01.102297 env[1377]: time="2025-10-31T01:33:01.102252967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:01.102449 env[1377]: time="2025-10-31T01:33:01.102434387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:01.102525 env[1377]: time="2025-10-31T01:33:01.102511455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:01.102808 env[1377]: time="2025-10-31T01:33:01.102704770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e pid=4115 runtime=io.containerd.runc.v2 Oct 31 01:33:01.139080 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:01.160896 env[1377]: time="2025-10-31T01:33:01.160872530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j9jdm,Uid:c35d3eac-7307-43de-bf4e-73472193a4cb,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e\"" Oct 31 01:33:01.170240 env[1377]: time="2025-10-31T01:33:01.170207166Z" level=info msg="CreateContainer within sandbox \"1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:33:01.197787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353160109.mount: Deactivated successfully. 
Oct 31 01:33:01.204436 env[1377]: time="2025-10-31T01:33:01.204406667Z" level=info msg="CreateContainer within sandbox \"1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8df013b9e6643d5a8f43e3f616fb92c7be00137cae097e2a0dc031131cbf8de\"" Oct 31 01:33:01.205835 env[1377]: time="2025-10-31T01:33:01.204852669Z" level=info msg="StartContainer for \"a8df013b9e6643d5a8f43e3f616fb92c7be00137cae097e2a0dc031131cbf8de\"" Oct 31 01:33:01.262331 env[1377]: time="2025-10-31T01:33:01.261398285Z" level=info msg="StartContainer for \"a8df013b9e6643d5a8f43e3f616fb92c7be00137cae097e2a0dc031131cbf8de\" returns successfully" Oct 31 01:33:01.300768 systemd-networkd[1125]: cali8fc7743da35: Gained IPv6LL Oct 31 01:33:01.571913 env[1377]: time="2025-10-31T01:33:01.571891739Z" level=info msg="StopPodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\"" Oct 31 01:33:01.583131 env[1377]: time="2025-10-31T01:33:01.582653607Z" level=info msg="StopPodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\"" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.638 [INFO][4207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.638 [INFO][4207] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" iface="eth0" netns="/var/run/netns/cni-31a0e543-bd01-cb41-4789-9bcc6bb52b5a" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.640 [INFO][4207] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" iface="eth0" netns="/var/run/netns/cni-31a0e543-bd01-cb41-4789-9bcc6bb52b5a" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.640 [INFO][4207] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" iface="eth0" netns="/var/run/netns/cni-31a0e543-bd01-cb41-4789-9bcc6bb52b5a" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.640 [INFO][4207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.640 [INFO][4207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.717 [INFO][4220] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.718 [INFO][4220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.721 [INFO][4220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.727 [WARNING][4220] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.727 [INFO][4220] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.728 [INFO][4220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:01.732208 env[1377]: 2025-10-31 01:33:01.729 [INFO][4207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:01.733490 env[1377]: time="2025-10-31T01:33:01.732329210Z" level=info msg="TearDown network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" successfully" Oct 31 01:33:01.733490 env[1377]: time="2025-10-31T01:33:01.732350485Z" level=info msg="StopPodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" returns successfully" Oct 31 01:33:01.733490 env[1377]: time="2025-10-31T01:33:01.733047355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7495b6f49d-9bz8s,Uid:3586a90a-6636-45ab-8082-c6aa9bdb62e3,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.662 [INFO][4208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.662 [INFO][4208] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" iface="eth0" netns="/var/run/netns/cni-052a589e-f83a-92d3-b652-8463b38d6c41" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.663 [INFO][4208] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" iface="eth0" netns="/var/run/netns/cni-052a589e-f83a-92d3-b652-8463b38d6c41" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.669 [INFO][4208] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" iface="eth0" netns="/var/run/netns/cni-052a589e-f83a-92d3-b652-8463b38d6c41" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.669 [INFO][4208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.669 [INFO][4208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.739 [INFO][4226] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.739 [INFO][4226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.739 [INFO][4226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.743 [WARNING][4226] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.743 [INFO][4226] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.744 [INFO][4226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:01.746440 env[1377]: 2025-10-31 01:33:01.745 [INFO][4208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:01.746790 env[1377]: time="2025-10-31T01:33:01.746570233Z" level=info msg="TearDown network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" successfully" Oct 31 01:33:01.746790 env[1377]: time="2025-10-31T01:33:01.746592948Z" level=info msg="StopPodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" returns successfully" Oct 31 01:33:01.747242 env[1377]: time="2025-10-31T01:33:01.747227845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27j8s,Uid:4bc01c1a-5001-4f6b-ad3f-615201398d71,Namespace:kube-system,Attempt:1,}" Oct 31 01:33:01.782855 kubelet[2284]: E1031 01:33:01.782828 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:33:01.784728 kubelet[2284]: E1031 01:33:01.782989 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:01.837563 kubelet[2284]: I1031 01:33:01.831763 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j9jdm" podStartSLOduration=42.825671171 podStartE2EDuration="42.825671171s" podCreationTimestamp="2025-10-31 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:33:01.825546724 +0000 UTC m=+46.437360814" watchObservedRunningTime="2025-10-31 01:33:01.825671171 +0000 UTC m=+46.437485250" Oct 31 01:33:01.852438 kernel: kauditd_printk_skb: 583 callbacks suppressed Oct 31 01:33:01.853114 kernel: audit: type=1325 audit(1761874381.844:411): table=filter:115 family=2 entries=20 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.853152 kernel: audit: type=1300 audit(1761874381.844:411): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf5b99b80 a2=0 a3=7ffdf5b99b6c items=0 ppid=2387 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.844000 audit[4274]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.844000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf5b99b80 a2=0 a3=7ffdf5b99b6c items=0 ppid=2387 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.857334 kernel: audit: type=1327 audit(1761874381.844:411): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.857381 kernel: audit: type=1325 audit(1761874381.855:412): table=nat:116 family=2 entries=14 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.855000 audit[4274]: NETFILTER_CFG table=nat:116 family=2 entries=14 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.855000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf5b99b80 a2=0 a3=0 items=0 ppid=2387 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.866587 kernel: audit: type=1300 audit(1761874381.855:412): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf5b99b80 a2=0 a3=0 items=0 ppid=2387 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.866665 kernel: audit: type=1327 audit(1761874381.855:412): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.872252 systemd-networkd[1125]: cali674cf0b2982: Link UP Oct 31 01:33:01.873777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali674cf0b2982: link becomes ready Oct 31 01:33:01.872357 systemd-networkd[1125]: cali674cf0b2982: Gained carrier Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.775 [INFO][4234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0 calico-apiserver-7495b6f49d- calico-apiserver 3586a90a-6636-45ab-8082-c6aa9bdb62e3 1002 0 2025-10-31 01:32:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7495b6f49d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7495b6f49d-9bz8s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali674cf0b2982 [] [] }} ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.775 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.808 [INFO][4260] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" HandleID="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.808 [INFO][4260] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" HandleID="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7495b6f49d-9bz8s", "timestamp":"2025-10-31 01:33:01.808807252 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.809 [INFO][4260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.809 [INFO][4260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.809 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.819 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.833 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.839 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.842 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.843 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.843 [INFO][4260] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.847 [INFO][4260] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.857 [INFO][4260] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.866 [INFO][4260] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" host="localhost" Oct 31 
01:33:01.882761 env[1377]: 2025-10-31 01:33:01.866 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" host="localhost" Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.866 [INFO][4260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:01.882761 env[1377]: 2025-10-31 01:33:01.866 [INFO][4260] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" HandleID="k8s-pod-network.f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.883356 env[1377]: 2025-10-31 01:33:01.869 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0", GenerateName:"calico-apiserver-7495b6f49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3586a90a-6636-45ab-8082-c6aa9bdb62e3", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7495b6f49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7495b6f49d-9bz8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali674cf0b2982", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:01.883356 env[1377]: 2025-10-31 01:33:01.869 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.883356 env[1377]: 2025-10-31 01:33:01.870 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali674cf0b2982 ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.883356 env[1377]: 2025-10-31 01:33:01.871 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.883356 env[1377]: 2025-10-31 01:33:01.874 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" 
Pod="calico-apiserver-7495b6f49d-9bz8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0", GenerateName:"calico-apiserver-7495b6f49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3586a90a-6636-45ab-8082-c6aa9bdb62e3", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7495b6f49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb", Pod:"calico-apiserver-7495b6f49d-9bz8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali674cf0b2982", MAC:"06:cd:4a:d0:91:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:01.883356 env[1377]: 2025-10-31 01:33:01.880 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb" Namespace="calico-apiserver" Pod="calico-apiserver-7495b6f49d-9bz8s" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:01.883000 audit[4278]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.883000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeab6d4bf0 a2=0 a3=7ffeab6d4bdc items=0 ppid=2387 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.891263 kernel: audit: type=1325 audit(1761874381.883:413): table=filter:117 family=2 entries=20 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.893434 kernel: audit: type=1300 audit(1761874381.883:413): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeab6d4bf0 a2=0 a3=7ffeab6d4bdc items=0 ppid=2387 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.893474 kernel: audit: type=1327 audit(1761874381.883:413): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.887000 audit[4278]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.898838 kernel: audit: type=1325 audit(1761874381.887:414): table=nat:118 family=2 entries=14 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:01.887000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=3468 a0=3 a1=7ffeab6d4bf0 a2=0 a3=0 items=0 ppid=2387 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:01.900375 env[1377]: time="2025-10-31T01:33:01.900324599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:01.900469 env[1377]: time="2025-10-31T01:33:01.900357964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:01.900525 env[1377]: time="2025-10-31T01:33:01.900455042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:01.900703 env[1377]: time="2025-10-31T01:33:01.900677536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb pid=4292 runtime=io.containerd.runc.v2 Oct 31 01:33:01.911000 audit[4311]: NETFILTER_CFG table=filter:119 family=2 entries=55 op=nft_register_chain pid=4311 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:01.911000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=28304 a0=3 a1=7fffd3a1ee10 a2=0 a3=7fffd3a1edfc items=0 ppid=3595 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:01.911000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:01.929659 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:01.940774 systemd-networkd[1125]: calia9c5889affe: Gained IPv6LL Oct 31 01:33:01.953274 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie36a760a9ee: link becomes ready Oct 31 01:33:01.952933 systemd-networkd[1125]: calie36a760a9ee: Link UP Oct 31 01:33:01.953024 systemd-networkd[1125]: calie36a760a9ee: Gained carrier Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.802 [INFO][4244] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--27j8s-eth0 coredns-668d6bf9bc- kube-system 4bc01c1a-5001-4f6b-ad3f-615201398d71 1003 0 2025-10-31 01:32:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-27j8s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie36a760a9ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.802 [INFO][4244] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.859 [INFO][4267] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" HandleID="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.859 [INFO][4267] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" HandleID="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cdcb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-27j8s", "timestamp":"2025-10-31 01:33:01.859225481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.859 [INFO][4267] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.870 [INFO][4267] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.870 [INFO][4267] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.918 [INFO][4267] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.937 [INFO][4267] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.940 [INFO][4267] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.941 [INFO][4267] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.943 [INFO][4267] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.943 [INFO][4267] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.943 [INFO][4267] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4 Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.945 [INFO][4267] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.948 [INFO][4267] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" host="localhost" Oct 31 
01:33:01.965963 env[1377]: 2025-10-31 01:33:01.948 [INFO][4267] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" host="localhost" Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.949 [INFO][4267] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:01.965963 env[1377]: 2025-10-31 01:33:01.949 [INFO][4267] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" HandleID="k8s-pod-network.18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.966433 env[1377]: 2025-10-31 01:33:01.950 [INFO][4244] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27j8s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4bc01c1a-5001-4f6b-ad3f-615201398d71", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-27j8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie36a760a9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:01.966433 env[1377]: 2025-10-31 01:33:01.950 [INFO][4244] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.966433 env[1377]: 2025-10-31 01:33:01.950 [INFO][4244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie36a760a9ee ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.966433 env[1377]: 2025-10-31 01:33:01.952 [INFO][4244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.966433 env[1377]: 2025-10-31 01:33:01.954 [INFO][4244] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27j8s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4bc01c1a-5001-4f6b-ad3f-615201398d71", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4", Pod:"coredns-668d6bf9bc-27j8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie36a760a9ee", MAC:"62:e1:ed:08:99:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:01.966433 env[1377]: 2025-10-31 01:33:01.964 [INFO][4244] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4" Namespace="kube-system" Pod="coredns-668d6bf9bc-27j8s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:01.976336 env[1377]: time="2025-10-31T01:33:01.976313294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7495b6f49d-9bz8s,Uid:3586a90a-6636-45ab-8082-c6aa9bdb62e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb\"" Oct 31 01:33:01.977722 env[1377]: time="2025-10-31T01:33:01.977708920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:01.978866 env[1377]: time="2025-10-31T01:33:01.978754071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:01.978866 env[1377]: time="2025-10-31T01:33:01.978802862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:01.978866 env[1377]: time="2025-10-31T01:33:01.978818750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:01.980034 env[1377]: time="2025-10-31T01:33:01.978913209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4 pid=4342 runtime=io.containerd.runc.v2 Oct 31 01:33:02.002290 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:02.006000 audit[4368]: NETFILTER_CFG table=filter:120 family=2 entries=44 op=nft_register_chain pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:02.006000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7fff8cedc0f0 a2=0 a3=7fff8cedc0dc items=0 ppid=3595 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:02.006000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:02.027440 env[1377]: time="2025-10-31T01:33:02.027410720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-27j8s,Uid:4bc01c1a-5001-4f6b-ad3f-615201398d71,Namespace:kube-system,Attempt:1,} returns sandbox id \"18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4\"" Oct 31 01:33:02.029838 env[1377]: time="2025-10-31T01:33:02.029819030Z" level=info msg="CreateContainer within sandbox \"18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:33:02.036119 env[1377]: time="2025-10-31T01:33:02.036073507Z" level=info msg="CreateContainer within sandbox \"18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bbf7375a4231c691ed13c8d97297797b338c970fbe3772c3e22c62b9ab64819\"" Oct 31 01:33:02.037043 env[1377]: time="2025-10-31T01:33:02.037019098Z" level=info msg="StartContainer for \"9bbf7375a4231c691ed13c8d97297797b338c970fbe3772c3e22c62b9ab64819\"" Oct 31 01:33:02.077885 env[1377]: time="2025-10-31T01:33:02.077848728Z" level=info msg="StartContainer for \"9bbf7375a4231c691ed13c8d97297797b338c970fbe3772c3e22c62b9ab64819\" returns successfully" Oct 31 01:33:02.113843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009948841.mount: Deactivated successfully. Oct 31 01:33:02.114310 systemd[1]: run-netns-cni\x2d31a0e543\x2dbd01\x2dcb41\x2d4789\x2d9bcc6bb52b5a.mount: Deactivated successfully. Oct 31 01:33:02.114483 systemd[1]: run-netns-cni\x2d052a589e\x2df83a\x2d92d3\x2db652\x2d8463b38d6c41.mount: Deactivated successfully. Oct 31 01:33:02.196736 systemd-networkd[1125]: cali35c6f7c7cd9: Gained IPv6LL Oct 31 01:33:02.339506 env[1377]: time="2025-10-31T01:33:02.339471993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:02.340615 env[1377]: time="2025-10-31T01:33:02.340585363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:02.340859 kubelet[2284]: E1031 01:33:02.340826 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:02.340904 kubelet[2284]: E1031 
01:33:02.340875 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:02.341014 kubelet[2284]: E1031 01:33:02.340974 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqf4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7495b6f49d-9bz8s_calico-apiserver(3586a90a-6636-45ab-8082-c6aa9bdb62e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:02.342190 kubelet[2284]: E1031 01:33:02.342125 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:33:02.571292 env[1377]: time="2025-10-31T01:33:02.570459807Z" level=info msg="StopPodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\"" Oct 31 01:33:02.650377 env[1377]: 
2025-10-31 01:33:02.613 [INFO][4425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.613 [INFO][4425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" iface="eth0" netns="/var/run/netns/cni-69f175c7-220d-cabc-ac6a-9dee6e226f1e" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.613 [INFO][4425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" iface="eth0" netns="/var/run/netns/cni-69f175c7-220d-cabc-ac6a-9dee6e226f1e" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.614 [INFO][4425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" iface="eth0" netns="/var/run/netns/cni-69f175c7-220d-cabc-ac6a-9dee6e226f1e" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.614 [INFO][4425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.614 [INFO][4425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.639 [INFO][4432] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.639 [INFO][4432] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.639 [INFO][4432] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.645 [WARNING][4432] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.645 [INFO][4432] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.647 [INFO][4432] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:02.650377 env[1377]: 2025-10-31 01:33:02.648 [INFO][4425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:02.653220 env[1377]: time="2025-10-31T01:33:02.650548746Z" level=info msg="TearDown network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" successfully" Oct 31 01:33:02.653220 env[1377]: time="2025-10-31T01:33:02.650573657Z" level=info msg="StopPodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" returns successfully" Oct 31 01:33:02.653220 env[1377]: time="2025-10-31T01:33:02.651083267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vc6st,Uid:68e0baab-eac1-409d-a79c-945bc83eb739,Namespace:calico-system,Attempt:1,}" Oct 31 01:33:02.652556 systemd[1]: run-netns-cni\x2d69f175c7\x2d220d\x2dcabc\x2dac6a\x2d9dee6e226f1e.mount: Deactivated successfully. 
Oct 31 01:33:02.751089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:33:02.751210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5d49b778d08: link becomes ready Oct 31 01:33:02.751822 systemd-networkd[1125]: cali5d49b778d08: Link UP Oct 31 01:33:02.751925 systemd-networkd[1125]: cali5d49b778d08: Gained carrier Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.692 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vc6st-eth0 csi-node-driver- calico-system 68e0baab-eac1-409d-a79c-945bc83eb739 1032 0 2025-10-31 01:32:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vc6st eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5d49b778d08 [] [] }} ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.692 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.713 [INFO][4451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" HandleID="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 
01:33:02.761337 env[1377]: 2025-10-31 01:33:02.713 [INFO][4451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" HandleID="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vc6st", "timestamp":"2025-10-31 01:33:02.713007695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.713 [INFO][4451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.713 [INFO][4451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.713 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.718 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.722 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.726 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.727 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.729 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.729 [INFO][4451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.730 [INFO][4451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6 Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.733 [INFO][4451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.738 [INFO][4451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" host="localhost" Oct 31 
01:33:02.761337 env[1377]: 2025-10-31 01:33:02.738 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" host="localhost" Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.738 [INFO][4451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:02.761337 env[1377]: 2025-10-31 01:33:02.738 [INFO][4451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" HandleID="k8s-pod-network.8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.762900 env[1377]: 2025-10-31 01:33:02.740 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vc6st-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68e0baab-eac1-409d-a79c-945bc83eb739", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vc6st", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d49b778d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:02.762900 env[1377]: 2025-10-31 01:33:02.740 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.762900 env[1377]: 2025-10-31 01:33:02.740 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d49b778d08 ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.762900 env[1377]: 2025-10-31 01:33:02.751 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.762900 env[1377]: 2025-10-31 01:33:02.751 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vc6st-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68e0baab-eac1-409d-a79c-945bc83eb739", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6", Pod:"csi-node-driver-vc6st", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d49b778d08", MAC:"c2:54:ee:57:78:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:02.762900 env[1377]: 2025-10-31 01:33:02.758 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6" Namespace="calico-system" Pod="csi-node-driver-vc6st" WorkloadEndpoint="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:02.772247 env[1377]: time="2025-10-31T01:33:02.772202503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:02.772341 env[1377]: time="2025-10-31T01:33:02.772252324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:02.772341 env[1377]: time="2025-10-31T01:33:02.772274808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:02.772406 env[1377]: time="2025-10-31T01:33:02.772379081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6 pid=4473 runtime=io.containerd.runc.v2 Oct 31 01:33:02.773000 audit[4479]: NETFILTER_CFG table=filter:121 family=2 entries=52 op=nft_register_chain pid=4479 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:02.773000 audit[4479]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7ffe089b0410 a2=0 a3=7ffe089b03fc items=0 ppid=3595 pid=4479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:02.773000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:02.809501 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:02.819508 kubelet[2284]: E1031 01:33:02.818024 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:33:02.825868 env[1377]: time="2025-10-31T01:33:02.825798836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vc6st,Uid:68e0baab-eac1-409d-a79c-945bc83eb739,Namespace:calico-system,Attempt:1,} returns sandbox id \"8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6\"" Oct 31 01:33:02.845858 env[1377]: time="2025-10-31T01:33:02.845828354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:33:02.851261 kubelet[2284]: I1031 01:33:02.850677 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-27j8s" podStartSLOduration=43.850663839 podStartE2EDuration="43.850663839s" podCreationTimestamp="2025-10-31 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:33:02.850120672 +0000 UTC m=+47.461934761" watchObservedRunningTime="2025-10-31 01:33:02.850663839 +0000 UTC m=+47.462477921" Oct 31 01:33:02.862000 audit[4509]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:02.862000 audit[4509]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa8d3ad20 a2=0 a3=7fffa8d3ad0c items=0 ppid=2387 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:02.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Oct 31 01:33:02.869000 audit[4509]: NETFILTER_CFG table=nat:123 family=2 entries=14 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:02.869000 audit[4509]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffa8d3ad20 a2=0 a3=0 items=0 ppid=2387 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:02.869000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:03.167154 env[1377]: time="2025-10-31T01:33:03.167073156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:03.167573 env[1377]: time="2025-10-31T01:33:03.167548272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:33:03.167746 kubelet[2284]: E1031 01:33:03.167722 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:33:03.167796 kubelet[2284]: E1031 01:33:03.167753 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:33:03.167882 
kubelet[2284]: E1031 01:33:03.167843 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:03.169485 env[1377]: time="2025-10-31T01:33:03.169470774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:33:03.220697 systemd-networkd[1125]: calie36a760a9ee: Gained IPv6LL Oct 31 01:33:03.512531 env[1377]: time="2025-10-31T01:33:03.512450716Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:03.512963 env[1377]: time="2025-10-31T01:33:03.512936943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:33:03.513100 kubelet[2284]: E1031 01:33:03.513077 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:33:03.513180 kubelet[2284]: E1031 01:33:03.513168 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:33:03.515036 kubelet[2284]: E1031 01:33:03.515005 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:03.518697 kubelet[2284]: E1031 01:33:03.518673 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:03.579120 env[1377]: time="2025-10-31T01:33:03.579060544Z" level=info msg="StopPodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\"" Oct 31 01:33:03.579674 env[1377]: time="2025-10-31T01:33:03.579641942Z" level=info msg="StopPodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\"" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.612 [INFO][4537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.613 
[INFO][4537] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" iface="eth0" netns="/var/run/netns/cni-6c8f88ad-df44-f0ee-14bd-2594165e2d02" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.613 [INFO][4537] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" iface="eth0" netns="/var/run/netns/cni-6c8f88ad-df44-f0ee-14bd-2594165e2d02" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.614 [INFO][4537] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" iface="eth0" netns="/var/run/netns/cni-6c8f88ad-df44-f0ee-14bd-2594165e2d02" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.614 [INFO][4537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.614 [INFO][4537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.634 [INFO][4550] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.634 [INFO][4550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.634 [INFO][4550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.639 [WARNING][4550] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.639 [INFO][4550] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.642 [INFO][4550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:03.645214 env[1377]: 2025-10-31 01:33:03.643 [INFO][4537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:03.647372 systemd[1]: run-netns-cni\x2d6c8f88ad\x2ddf44\x2df0ee\x2d14bd\x2d2594165e2d02.mount: Deactivated successfully. 
Oct 31 01:33:03.648130 env[1377]: time="2025-10-31T01:33:03.647619905Z" level=info msg="TearDown network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" successfully" Oct 31 01:33:03.648130 env[1377]: time="2025-10-31T01:33:03.647651922Z" level=info msg="StopPodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" returns successfully" Oct 31 01:33:03.648370 env[1377]: time="2025-10-31T01:33:03.648351728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf7465748-bpws2,Uid:c5ff4853-41a8-4d7e-a0bc-f8d8451a400b,Namespace:calico-system,Attempt:1,}" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.625 [INFO][4536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.625 [INFO][4536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" iface="eth0" netns="/var/run/netns/cni-150f0b7a-5241-b2e0-1e7d-9a3e0ea37d45" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.625 [INFO][4536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" iface="eth0" netns="/var/run/netns/cni-150f0b7a-5241-b2e0-1e7d-9a3e0ea37d45" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.625 [INFO][4536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" iface="eth0" netns="/var/run/netns/cni-150f0b7a-5241-b2e0-1e7d-9a3e0ea37d45" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.625 [INFO][4536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.625 [INFO][4536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.668 [INFO][4556] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.669 [INFO][4556] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.669 [INFO][4556] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.673 [WARNING][4556] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.673 [INFO][4556] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.673 [INFO][4556] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:03.676003 env[1377]: 2025-10-31 01:33:03.674 [INFO][4536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:03.677882 systemd[1]: run-netns-cni\x2d150f0b7a\x2d5241\x2db2e0\x2d1e7d\x2d9a3e0ea37d45.mount: Deactivated successfully. 
Oct 31 01:33:03.678589 env[1377]: time="2025-10-31T01:33:03.678563200Z" level=info msg="TearDown network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" successfully" Oct 31 01:33:03.678671 env[1377]: time="2025-10-31T01:33:03.678659946Z" level=info msg="StopPodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" returns successfully" Oct 31 01:33:03.679287 env[1377]: time="2025-10-31T01:33:03.679257403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-44rl6,Uid:03dcc52f-4acf-4546-b99b-cf4de4d54704,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:33:03.760458 systemd-networkd[1125]: cali544bc597c85: Link UP Oct 31 01:33:03.762437 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:33:03.762567 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali544bc597c85: link becomes ready Oct 31 01:33:03.763486 systemd-networkd[1125]: cali544bc597c85: Gained carrier Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.710 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0 calico-kube-controllers-6cf7465748- calico-system c5ff4853-41a8-4d7e-a0bc-f8d8451a400b 1064 0 2025-10-31 01:32:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cf7465748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6cf7465748-bpws2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali544bc597c85 [] [] }} ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.710 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.737 [INFO][4585] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" HandleID="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.737 [INFO][4585] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" HandleID="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6cf7465748-bpws2", "timestamp":"2025-10-31 01:33:03.737660688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.738 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.738 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.738 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.743 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.745 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.747 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.748 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.749 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.749 [INFO][4585] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.750 [INFO][4585] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9 Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.752 [INFO][4585] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.755 [INFO][4585] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" host="localhost" Oct 31 
01:33:03.777507 env[1377]: 2025-10-31 01:33:03.755 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" host="localhost" Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.755 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:03.777507 env[1377]: 2025-10-31 01:33:03.755 [INFO][4585] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" HandleID="k8s-pod-network.88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.779908 env[1377]: 2025-10-31 01:33:03.757 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0", GenerateName:"calico-kube-controllers-6cf7465748-", Namespace:"calico-system", SelfLink:"", UID:"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf7465748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6cf7465748-bpws2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali544bc597c85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:03.779908 env[1377]: 2025-10-31 01:33:03.757 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.779908 env[1377]: 2025-10-31 01:33:03.757 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali544bc597c85 ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.779908 env[1377]: 2025-10-31 01:33:03.763 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.779908 env[1377]: 2025-10-31 01:33:03.764 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0", GenerateName:"calico-kube-controllers-6cf7465748-", Namespace:"calico-system", SelfLink:"", UID:"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf7465748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9", Pod:"calico-kube-controllers-6cf7465748-bpws2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali544bc597c85", MAC:"ce:c7:5c:1d:0a:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:03.779908 env[1377]: 2025-10-31 01:33:03.771 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9" Namespace="calico-system" Pod="calico-kube-controllers-6cf7465748-bpws2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:03.783751 env[1377]: time="2025-10-31T01:33:03.783691241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:03.783751 env[1377]: time="2025-10-31T01:33:03.783718851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:03.783751 env[1377]: time="2025-10-31T01:33:03.783726042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:03.783993 env[1377]: time="2025-10-31T01:33:03.783964544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9 pid=4616 runtime=io.containerd.runc.v2 Oct 31 01:33:03.793000 audit[4633]: NETFILTER_CFG table=filter:124 family=2 entries=62 op=nft_register_chain pid=4633 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:03.793000 audit[4633]: SYSCALL arch=c000003e syscall=46 success=yes exit=28352 a0=3 a1=7ffcee793840 a2=0 a3=7ffcee79382c items=0 ppid=3595 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:03.793000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:03.804222 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or 
address Oct 31 01:33:03.822004 kubelet[2284]: E1031 01:33:03.821979 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:33:03.825899 kubelet[2284]: E1031 01:33:03.824349 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:03.858912 env[1377]: time="2025-10-31T01:33:03.855885905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf7465748-bpws2,Uid:c5ff4853-41a8-4d7e-a0bc-f8d8451a400b,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9\"" Oct 31 01:33:03.858912 env[1377]: time="2025-10-31T01:33:03.858755888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:33:03.860851 systemd-networkd[1125]: cali674cf0b2982: Gained IPv6LL Oct 31 01:33:03.877769 systemd-networkd[1125]: calif1a4806cb35: Link UP Oct 31 01:33:03.880074 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif1a4806cb35: link becomes ready Oct 31 01:33:03.879991 systemd-networkd[1125]: calif1a4806cb35: Gained carrier Oct 31 01:33:03.892000 audit[4654]: NETFILTER_CFG table=filter:125 family=2 entries=17 op=nft_register_rule pid=4654 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:03.892000 audit[4654]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc021bed0 a2=0 a3=7fffc021bebc items=0 ppid=2387 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:03.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.726 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0 calico-apiserver-59f47b46b- calico-apiserver 03dcc52f-4acf-4546-b99b-cf4de4d54704 1065 0 2025-10-31 01:32:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59f47b46b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59f47b46b-44rl6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calif1a4806cb35 [] [] }} ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.727 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.774 [INFO][4593] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" HandleID="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.775 [INFO][4593] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" HandleID="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003253a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59f47b46b-44rl6", "timestamp":"2025-10-31 01:33:03.77499247 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.775 [INFO][4593] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.775 [INFO][4593] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.775 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.845 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.850 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.854 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.857 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.860 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.860 [INFO][4593] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.863 [INFO][4593] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.868 [INFO][4593] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.871 [INFO][4593] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.871 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" host="localhost" Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.872 [INFO][4593] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:03.900032 env[1377]: 2025-10-31 01:33:03.872 [INFO][4593] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" HandleID="k8s-pod-network.c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.902394 env[1377]: 2025-10-31 01:33:03.873 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"03dcc52f-4acf-4546-b99b-cf4de4d54704", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59f47b46b-44rl6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a4806cb35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:03.902394 env[1377]: 2025-10-31 01:33:03.873 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.902394 env[1377]: 2025-10-31 01:33:03.873 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1a4806cb35 ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.902394 env[1377]: 2025-10-31 01:33:03.882 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.902394 env[1377]: 2025-10-31 01:33:03.883 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"03dcc52f-4acf-4546-b99b-cf4de4d54704", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f", Pod:"calico-apiserver-59f47b46b-44rl6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a4806cb35", MAC:"56:3b:f1:5d:bc:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:03.902394 env[1377]: 2025-10-31 01:33:03.888 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f" Namespace="calico-apiserver" Pod="calico-apiserver-59f47b46b-44rl6" WorkloadEndpoint="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:03.907000 audit[4663]: NETFILTER_CFG table=filter:126 family=2 entries=57 op=nft_register_chain pid=4663 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:33:03.907000 audit[4663]: SYSCALL arch=c000003e syscall=46 success=yes exit=27796 a0=3 a1=7ffe52808070 a2=0 a3=7ffe5280805c items=0 ppid=3595 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:03.907000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:33:03.910000 audit[4654]: NETFILTER_CFG table=nat:127 family=2 entries=47 op=nft_register_chain pid=4654 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:03.910000 audit[4654]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffc021bed0 a2=0 a3=7fffc021bebc items=0 ppid=2387 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:03.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:03.915582 env[1377]: time="2025-10-31T01:33:03.915527418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:33:03.915699 env[1377]: time="2025-10-31T01:33:03.915592404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:33:03.915699 env[1377]: time="2025-10-31T01:33:03.915638199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:33:03.915913 env[1377]: time="2025-10-31T01:33:03.915877699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f pid=4673 runtime=io.containerd.runc.v2 Oct 31 01:33:03.946327 systemd-resolved[1294]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:33:03.982841 env[1377]: time="2025-10-31T01:33:03.982042732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59f47b46b-44rl6,Uid:03dcc52f-4acf-4546-b99b-cf4de4d54704,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f\"" Oct 31 01:33:04.201663 env[1377]: time="2025-10-31T01:33:04.201628616Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:04.202064 env[1377]: time="2025-10-31T01:33:04.202040044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:33:04.207122 kubelet[2284]: E1031 01:33:04.207078 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:33:04.207241 kubelet[2284]: E1031 01:33:04.207131 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:33:04.207520 env[1377]: time="2025-10-31T01:33:04.207486193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:04.211697 kubelet[2284]: E1031 01:33:04.207308 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPat
hExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cf7465748-bpws2_calico-system(c5ff4853-41a8-4d7e-a0bc-f8d8451a400b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:04.212961 kubelet[2284]: E1031 01:33:04.212929 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:04.530620 env[1377]: time="2025-10-31T01:33:04.530533646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:04.544845 env[1377]: time="2025-10-31T01:33:04.544796999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:04.544984 kubelet[2284]: E1031 01:33:04.544960 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:04.545029 kubelet[2284]: E1031 01:33:04.544991 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:04.545112 kubelet[2284]: E1031 01:33:04.545074 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzscg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-44rl6_calico-apiserver(03dcc52f-4acf-4546-b99b-cf4de4d54704): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:04.546334 kubelet[2284]: E1031 01:33:04.546318 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:04.564813 systemd-networkd[1125]: cali5d49b778d08: Gained IPv6LL Oct 31 01:33:04.824522 kubelet[2284]: E1031 01:33:04.824485 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:04.827923 kubelet[2284]: E1031 01:33:04.827903 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:04.828338 kubelet[2284]: E1031 01:33:04.828307 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:04.863000 audit[4709]: NETFILTER_CFG table=filter:128 family=2 entries=14 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:04.863000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff63726c90 a2=0 a3=7fff63726c7c items=0 ppid=2387 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:04.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:04.868000 
audit[4709]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:33:04.868000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff63726c90 a2=0 a3=7fff63726c7c items=0 ppid=2387 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:04.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:33:05.140726 systemd-networkd[1125]: calif1a4806cb35: Gained IPv6LL Oct 31 01:33:05.780986 systemd-networkd[1125]: cali544bc597c85: Gained IPv6LL Oct 31 01:33:05.829250 kubelet[2284]: E1031 01:33:05.829221 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:05.829521 kubelet[2284]: E1031 01:33:05.829407 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:13.571511 env[1377]: time="2025-10-31T01:33:13.571481025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:33:13.879293 env[1377]: time="2025-10-31T01:33:13.879202605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:13.916377 env[1377]: time="2025-10-31T01:33:13.916301896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:33:13.916706 kubelet[2284]: E1031 01:33:13.916678 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:33:13.917047 kubelet[2284]: E1031 01:33:13.917031 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:33:13.917374 env[1377]: time="2025-10-31T01:33:13.917349185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:13.917690 kubelet[2284]: E1031 01:33:13.917657 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fca255853824dd5924def6ba75879e0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:14.305625 env[1377]: time="2025-10-31T01:33:14.305524391Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:14.324390 
env[1377]: time="2025-10-31T01:33:14.324327581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:14.324593 kubelet[2284]: E1031 01:33:14.324536 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:14.324674 kubelet[2284]: E1031 01:33:14.324614 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:14.324877 env[1377]: time="2025-10-31T01:33:14.324853787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:33:14.324926 kubelet[2284]: E1031 01:33:14.324837 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b64v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-s9kdt_calico-apiserver(8f69b598-caa2-4abe-911a-df60fbb3c4df): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:14.326400 kubelet[2284]: E1031 01:33:14.326372 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:14.689025 env[1377]: time="2025-10-31T01:33:14.688988741Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:14.689312 env[1377]: time="2025-10-31T01:33:14.689267024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:33:14.689453 kubelet[2284]: E1031 01:33:14.689432 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:33:14.689533 kubelet[2284]: E1031 01:33:14.689522 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:33:14.689827 kubelet[2284]: E1031 01:33:14.689762 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:14.691050 kubelet[2284]: E1031 01:33:14.691025 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:33:15.542196 env[1377]: time="2025-10-31T01:33:15.542163734Z" level=info msg="StopPodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\"" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.573 [WARNING][4728] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c35d3eac-7307-43de-bf4e-73472193a4cb", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e", Pod:"coredns-668d6bf9bc-j9jdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35c6f7c7cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.573 [INFO][4728] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.573 [INFO][4728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" iface="eth0" netns="" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.573 [INFO][4728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.573 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.596 [INFO][4735] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.596 [INFO][4735] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.596 [INFO][4735] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.601 [WARNING][4735] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.601 [INFO][4735] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.602 [INFO][4735] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:15.606220 env[1377]: 2025-10-31 01:33:15.603 [INFO][4728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.606220 env[1377]: time="2025-10-31T01:33:15.605408626Z" level=info msg="TearDown network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" successfully" Oct 31 01:33:15.606220 env[1377]: time="2025-10-31T01:33:15.605427149Z" level=info msg="StopPodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" returns successfully" Oct 31 01:33:15.616697 env[1377]: time="2025-10-31T01:33:15.616655889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:33:15.639079 env[1377]: time="2025-10-31T01:33:15.638941094Z" level=info msg="RemovePodSandbox for \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\"" Oct 31 01:33:15.639079 env[1377]: time="2025-10-31T01:33:15.638981874Z" level=info msg="Forcibly stopping sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\"" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.664 [WARNING][4752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete 
WEP. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c35d3eac-7307-43de-bf4e-73472193a4cb", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f3473c4800660473d609053d1289022cfd0444ef95e01da3854d558ae53db2e", Pod:"coredns-668d6bf9bc-j9jdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35c6f7c7cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.664 [INFO][4752] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.665 [INFO][4752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" iface="eth0" netns="" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.665 [INFO][4752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.665 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.679 [INFO][4759] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.679 [INFO][4759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.679 [INFO][4759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.683 [WARNING][4759] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.683 [INFO][4759] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" HandleID="k8s-pod-network.660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Workload="localhost-k8s-coredns--668d6bf9bc--j9jdm-eth0" Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.684 [INFO][4759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:15.687348 env[1377]: 2025-10-31 01:33:15.686 [INFO][4752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8" Oct 31 01:33:15.687738 env[1377]: time="2025-10-31T01:33:15.687372285Z" level=info msg="TearDown network for sandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" successfully" Oct 31 01:33:15.690792 env[1377]: time="2025-10-31T01:33:15.690777825Z" level=info msg="RemovePodSandbox \"660a1bc9f26fedc398ff472056caa51e1793e5e319a6e213b713da6c45aecdd8\" returns successfully" Oct 31 01:33:15.691083 env[1377]: time="2025-10-31T01:33:15.691065297Z" level=info msg="StopPodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\"" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.716 [WARNING][4773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27j8s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4bc01c1a-5001-4f6b-ad3f-615201398d71", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4", Pod:"coredns-668d6bf9bc-27j8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie36a760a9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.716 [INFO][4773] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.716 [INFO][4773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" iface="eth0" netns="" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.716 [INFO][4773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.716 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.729 [INFO][4780] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.729 [INFO][4780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.729 [INFO][4780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.733 [WARNING][4780] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.733 [INFO][4780] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.734 [INFO][4780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:15.736380 env[1377]: 2025-10-31 01:33:15.735 [INFO][4773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.737409 env[1377]: time="2025-10-31T01:33:15.736406397Z" level=info msg="TearDown network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" successfully" Oct 31 01:33:15.737409 env[1377]: time="2025-10-31T01:33:15.736426453Z" level=info msg="StopPodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" returns successfully" Oct 31 01:33:15.738632 env[1377]: time="2025-10-31T01:33:15.738601735Z" level=info msg="RemovePodSandbox for \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\"" Oct 31 01:33:15.738668 env[1377]: time="2025-10-31T01:33:15.738637955Z" level=info msg="Forcibly stopping sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\"" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.759 [WARNING][4795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--27j8s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4bc01c1a-5001-4f6b-ad3f-615201398d71", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18c9ee7a3733ba468290c84864dca370471398d9869535678996bfce2f2771f4", Pod:"coredns-668d6bf9bc-27j8s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie36a760a9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.760 [INFO][4795] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.760 [INFO][4795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" iface="eth0" netns="" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.760 [INFO][4795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.760 [INFO][4795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.775 [INFO][4802] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.775 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.775 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.779 [WARNING][4802] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.779 [INFO][4802] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" HandleID="k8s-pod-network.a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Workload="localhost-k8s-coredns--668d6bf9bc--27j8s-eth0" Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.780 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:15.784863 env[1377]: 2025-10-31 01:33:15.781 [INFO][4795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0" Oct 31 01:33:15.785225 env[1377]: time="2025-10-31T01:33:15.784877854Z" level=info msg="TearDown network for sandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" successfully" Oct 31 01:33:15.789356 env[1377]: time="2025-10-31T01:33:15.789205177Z" level=info msg="RemovePodSandbox \"a3f2fe237e95c197a2f6b326a1b871ff16dcf0c00c677db5ddf2964768d2fbe0\" returns successfully" Oct 31 01:33:15.789601 env[1377]: time="2025-10-31T01:33:15.789587797Z" level=info msg="StopPodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\"" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.815 [WARNING][4816] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8nrww-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d10ebfbe-91f8-4576-8542-06b4d8a152be", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9", Pod:"goldmane-666569f655-8nrww", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9c5889affe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.816 [INFO][4816] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.816 [INFO][4816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" iface="eth0" netns="" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.816 [INFO][4816] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.816 [INFO][4816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.829 [INFO][4823] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.829 [INFO][4823] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.829 [INFO][4823] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.832 [WARNING][4823] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.832 [INFO][4823] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.833 [INFO][4823] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:33:15.835955 env[1377]: 2025-10-31 01:33:15.834 [INFO][4816] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.836318 env[1377]: time="2025-10-31T01:33:15.835970785Z" level=info msg="TearDown network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" successfully" Oct 31 01:33:15.836318 env[1377]: time="2025-10-31T01:33:15.835995797Z" level=info msg="StopPodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" returns successfully" Oct 31 01:33:15.836497 env[1377]: time="2025-10-31T01:33:15.836483409Z" level=info msg="RemovePodSandbox for \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\"" Oct 31 01:33:15.836571 env[1377]: time="2025-10-31T01:33:15.836539462Z" level=info msg="Forcibly stopping sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\"" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.865 [WARNING][4837] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8nrww-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d10ebfbe-91f8-4576-8542-06b4d8a152be", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e189e0ecdeec3111c46c118e943a28fc0ca959044e13af3d633adf11e200f0e9", Pod:"goldmane-666569f655-8nrww", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9c5889affe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.865 [INFO][4837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.865 [INFO][4837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" iface="eth0" netns="" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.866 [INFO][4837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.866 [INFO][4837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.879 [INFO][4845] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.879 [INFO][4845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.879 [INFO][4845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.883 [WARNING][4845] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.883 [INFO][4845] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" HandleID="k8s-pod-network.4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Workload="localhost-k8s-goldmane--666569f655--8nrww-eth0" Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.883 [INFO][4845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:33:15.886767 env[1377]: 2025-10-31 01:33:15.885 [INFO][4837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759" Oct 31 01:33:15.887150 env[1377]: time="2025-10-31T01:33:15.887126333Z" level=info msg="TearDown network for sandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" successfully" Oct 31 01:33:15.888469 env[1377]: time="2025-10-31T01:33:15.888455494Z" level=info msg="RemovePodSandbox \"4c0bab7a2ed6ec3c12b99faf075344d81f292b4cad6e7a55ec04d0974d8e7759\" returns successfully" Oct 31 01:33:15.888788 env[1377]: time="2025-10-31T01:33:15.888776444Z" level=info msg="StopPodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\"" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.908 [WARNING][4860] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" WorkloadEndpoint="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.908 [INFO][4860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.908 [INFO][4860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" iface="eth0" netns="" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.908 [INFO][4860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.908 [INFO][4860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.921 [INFO][4869] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.922 [INFO][4869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.922 [INFO][4869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.925 [WARNING][4869] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.925 [INFO][4869] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.926 [INFO][4869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:33:15.928815 env[1377]: 2025-10-31 01:33:15.927 [INFO][4860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.929743 env[1377]: time="2025-10-31T01:33:15.929068526Z" level=info msg="TearDown network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" successfully" Oct 31 01:33:15.929743 env[1377]: time="2025-10-31T01:33:15.929088092Z" level=info msg="StopPodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" returns successfully" Oct 31 01:33:15.929743 env[1377]: time="2025-10-31T01:33:15.929403459Z" level=info msg="RemovePodSandbox for \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\"" Oct 31 01:33:15.929743 env[1377]: time="2025-10-31T01:33:15.929420049Z" level=info msg="Forcibly stopping sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\"" Oct 31 01:33:15.943677 env[1377]: time="2025-10-31T01:33:15.943649787Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:15.944073 env[1377]: time="2025-10-31T01:33:15.944051363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:33:15.946036 kubelet[2284]: E1031 01:33:15.944773 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:33:15.946927 kubelet[2284]: E1031 01:33:15.946908 2284 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:33:15.948660 env[1377]: time="2025-10-31T01:33:15.948641312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:33:15.953378 kubelet[2284]: E1031 01:33:15.953282 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kdtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil
,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8nrww_calico-system(d10ebfbe-91f8-4576-8542-06b4d8a152be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:15.954884 kubelet[2284]: E1031 01:33:15.954811 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.949 [WARNING][4883] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" WorkloadEndpoint="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.950 [INFO][4883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.950 [INFO][4883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" iface="eth0" netns="" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.950 [INFO][4883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.950 [INFO][4883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.966 [INFO][4891] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.966 [INFO][4891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.966 [INFO][4891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.970 [WARNING][4891] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.970 [INFO][4891] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" HandleID="k8s-pod-network.16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Workload="localhost-k8s-whisker--dc76fd6ff--vkhmk-eth0" Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.971 [INFO][4891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:15.973514 env[1377]: 2025-10-31 01:33:15.972 [INFO][4883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681" Oct 31 01:33:15.973867 env[1377]: time="2025-10-31T01:33:15.973846456Z" level=info msg="TearDown network for sandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" successfully" Oct 31 01:33:15.975282 env[1377]: time="2025-10-31T01:33:15.975269098Z" level=info msg="RemovePodSandbox \"16e510b62672d74d105ab6a96889818618766c7743e9a2af0797521a7f0d6681\" returns successfully" Oct 31 01:33:15.975653 env[1377]: time="2025-10-31T01:33:15.975634296Z" level=info msg="StopPodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\"" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:15.996 [WARNING][4907] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f69b598-caa2-4abe-911a-df60fbb3c4df", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1", Pod:"calico-apiserver-59f47b46b-s9kdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fc7743da35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:15.996 [INFO][4907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:15.996 [INFO][4907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" iface="eth0" netns="" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:15.996 [INFO][4907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:15.996 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.011 [INFO][4914] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.011 [INFO][4914] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.011 [INFO][4914] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.015 [WARNING][4914] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.015 [INFO][4914] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.015 [INFO][4914] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.018125 env[1377]: 2025-10-31 01:33:16.017 [INFO][4907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.019086 env[1377]: time="2025-10-31T01:33:16.018690202Z" level=info msg="TearDown network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" successfully" Oct 31 01:33:16.019086 env[1377]: time="2025-10-31T01:33:16.018710975Z" level=info msg="StopPodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" returns successfully" Oct 31 01:33:16.019251 env[1377]: time="2025-10-31T01:33:16.019232408Z" level=info msg="RemovePodSandbox for \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\"" Oct 31 01:33:16.019286 env[1377]: time="2025-10-31T01:33:16.019255046Z" level=info msg="Forcibly stopping sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\"" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.041 [WARNING][4928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f69b598-caa2-4abe-911a-df60fbb3c4df", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2bd4d841158d061e1b924d180675f86ca9db5a6617aa40ba034c415e5e116a1", Pod:"calico-apiserver-59f47b46b-s9kdt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8fc7743da35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.041 [INFO][4928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.041 [INFO][4928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" iface="eth0" netns="" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.041 [INFO][4928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.041 [INFO][4928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.055 [INFO][4935] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.055 [INFO][4935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.055 [INFO][4935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.059 [WARNING][4935] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.059 [INFO][4935] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" HandleID="k8s-pod-network.0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Workload="localhost-k8s-calico--apiserver--59f47b46b--s9kdt-eth0" Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.060 [INFO][4935] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.062946 env[1377]: 2025-10-31 01:33:16.061 [INFO][4928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57" Oct 31 01:33:16.063983 env[1377]: time="2025-10-31T01:33:16.063200300Z" level=info msg="TearDown network for sandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" successfully" Oct 31 01:33:16.064823 env[1377]: time="2025-10-31T01:33:16.064810120Z" level=info msg="RemovePodSandbox \"0f9e76c08e7066d5d6aee4b43aab25d278801a49577b0dd54668526bb5edbf57\" returns successfully" Oct 31 01:33:16.065176 env[1377]: time="2025-10-31T01:33:16.065160221Z" level=info msg="StopPodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\"" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.085 [WARNING][4950] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0", GenerateName:"calico-kube-controllers-6cf7465748-", Namespace:"calico-system", SelfLink:"", UID:"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf7465748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9", Pod:"calico-kube-controllers-6cf7465748-bpws2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali544bc597c85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.085 [INFO][4950] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.085 [INFO][4950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" iface="eth0" netns="" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.085 [INFO][4950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.085 [INFO][4950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.099 [INFO][4958] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.099 [INFO][4958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.099 [INFO][4958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.103 [WARNING][4958] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.103 [INFO][4958] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.104 [INFO][4958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.108189 env[1377]: 2025-10-31 01:33:16.105 [INFO][4950] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.108189 env[1377]: time="2025-10-31T01:33:16.107085146Z" level=info msg="TearDown network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" successfully" Oct 31 01:33:16.108189 env[1377]: time="2025-10-31T01:33:16.107106462Z" level=info msg="StopPodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" returns successfully" Oct 31 01:33:16.109275 env[1377]: time="2025-10-31T01:33:16.109260264Z" level=info msg="RemovePodSandbox for \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\"" Oct 31 01:33:16.109354 env[1377]: time="2025-10-31T01:33:16.109331035Z" level=info msg="Forcibly stopping sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\"" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.130 [WARNING][4972] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0", GenerateName:"calico-kube-controllers-6cf7465748-", Namespace:"calico-system", SelfLink:"", UID:"c5ff4853-41a8-4d7e-a0bc-f8d8451a400b", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf7465748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88881dc01f2f85191d2d129efd1c837fc21cc54245917dae20c856760f316fc9", Pod:"calico-kube-controllers-6cf7465748-bpws2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali544bc597c85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.131 [INFO][4972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.131 [INFO][4972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" iface="eth0" netns="" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.131 [INFO][4972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.131 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.143 [INFO][4979] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.144 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.144 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.147 [WARNING][4979] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.147 [INFO][4979] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" HandleID="k8s-pod-network.adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Workload="localhost-k8s-calico--kube--controllers--6cf7465748--bpws2-eth0" Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.148 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.151018 env[1377]: 2025-10-31 01:33:16.149 [INFO][4972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3" Oct 31 01:33:16.151400 env[1377]: time="2025-10-31T01:33:16.151379358Z" level=info msg="TearDown network for sandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" successfully" Oct 31 01:33:16.164886 env[1377]: time="2025-10-31T01:33:16.164859048Z" level=info msg="RemovePodSandbox \"adb25d497f530262e3f30149078f5eaa0d0ef52dcabb55036103df9d0fc53bf3\" returns successfully" Oct 31 01:33:16.165348 env[1377]: time="2025-10-31T01:33:16.165335479Z" level=info msg="StopPodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\"" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.184 [WARNING][4993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0", GenerateName:"calico-apiserver-7495b6f49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3586a90a-6636-45ab-8082-c6aa9bdb62e3", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7495b6f49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb", Pod:"calico-apiserver-7495b6f49d-9bz8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali674cf0b2982", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.184 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.184 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" iface="eth0" netns="" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.184 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.184 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.199 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.199 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.199 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.202 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.202 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.203 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.205802 env[1377]: 2025-10-31 01:33:16.204 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.206198 env[1377]: time="2025-10-31T01:33:16.206177531Z" level=info msg="TearDown network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" successfully" Oct 31 01:33:16.206249 env[1377]: time="2025-10-31T01:33:16.206237860Z" level=info msg="StopPodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" returns successfully" Oct 31 01:33:16.210067 env[1377]: time="2025-10-31T01:33:16.210049101Z" level=info msg="RemovePodSandbox for \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\"" Oct 31 01:33:16.210142 env[1377]: time="2025-10-31T01:33:16.210070460Z" level=info msg="Forcibly stopping sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\"" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.230 [WARNING][5015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0", GenerateName:"calico-apiserver-7495b6f49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3586a90a-6636-45ab-8082-c6aa9bdb62e3", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7495b6f49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f04fa00d61a9e6613391799a4c0cda337310bb9eb143696f6bb69877342ca9eb", Pod:"calico-apiserver-7495b6f49d-9bz8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali674cf0b2982", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.230 [INFO][5015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.230 [INFO][5015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" iface="eth0" netns="" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.230 [INFO][5015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.230 [INFO][5015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.245 [INFO][5022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.245 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.245 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.249 [WARNING][5022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.249 [INFO][5022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" HandleID="k8s-pod-network.e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Workload="localhost-k8s-calico--apiserver--7495b6f49d--9bz8s-eth0" Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.249 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.252068 env[1377]: 2025-10-31 01:33:16.250 [INFO][5015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494" Oct 31 01:33:16.252413 env[1377]: time="2025-10-31T01:33:16.252085882Z" level=info msg="TearDown network for sandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" successfully" Oct 31 01:33:16.265674 env[1377]: time="2025-10-31T01:33:16.265644907Z" level=info msg="RemovePodSandbox \"e9b9f960fe0a05e1639f4b425911f59ace57268750f8e775d196eb85258ce494\" returns successfully" Oct 31 01:33:16.287484 env[1377]: time="2025-10-31T01:33:16.287456324Z" level=info msg="StopPodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\"" Oct 31 01:33:16.301840 env[1377]: time="2025-10-31T01:33:16.301811368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:16.302109 env[1377]: time="2025-10-31T01:33:16.302087040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:33:16.303312 kubelet[2284]: E1031 01:33:16.303200 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:33:16.303312 kubelet[2284]: E1031 01:33:16.303236 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:33:16.308319 kubelet[2284]: E1031 01:33:16.308243 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:16.310986 env[1377]: time="2025-10-31T01:33:16.310421996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.320 [WARNING][5036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"03dcc52f-4acf-4546-b99b-cf4de4d54704", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f", Pod:"calico-apiserver-59f47b46b-44rl6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a4806cb35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.320 [INFO][5036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.320 [INFO][5036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" iface="eth0" netns="" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.320 [INFO][5036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.320 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.333 [INFO][5043] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.333 [INFO][5043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.334 [INFO][5043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.337 [WARNING][5043] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.337 [INFO][5043] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.338 [INFO][5043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.340566 env[1377]: 2025-10-31 01:33:16.339 [INFO][5036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.340952 env[1377]: time="2025-10-31T01:33:16.340930529Z" level=info msg="TearDown network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" successfully" Oct 31 01:33:16.341001 env[1377]: time="2025-10-31T01:33:16.340989442Z" level=info msg="StopPodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" returns successfully" Oct 31 01:33:16.341338 env[1377]: time="2025-10-31T01:33:16.341319757Z" level=info msg="RemovePodSandbox for \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\"" Oct 31 01:33:16.341370 env[1377]: time="2025-10-31T01:33:16.341342836Z" level=info msg="Forcibly stopping sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\"" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.363 [WARNING][5057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0", GenerateName:"calico-apiserver-59f47b46b-", Namespace:"calico-apiserver", SelfLink:"", UID:"03dcc52f-4acf-4546-b99b-cf4de4d54704", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59f47b46b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c556828c3b1ca9507dd4a81c07df8d2090f1c926fdcb1e725e72126dbaec8d0f", Pod:"calico-apiserver-59f47b46b-44rl6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1a4806cb35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.363 [INFO][5057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.363 [INFO][5057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" iface="eth0" netns="" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.363 [INFO][5057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.363 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.377 [INFO][5064] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.377 [INFO][5064] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.377 [INFO][5064] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.380 [WARNING][5064] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.380 [INFO][5064] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" HandleID="k8s-pod-network.68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Workload="localhost-k8s-calico--apiserver--59f47b46b--44rl6-eth0" Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.381 [INFO][5064] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:33:16.385086 env[1377]: 2025-10-31 01:33:16.382 [INFO][5057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d" Oct 31 01:33:16.385500 env[1377]: time="2025-10-31T01:33:16.385477752Z" level=info msg="TearDown network for sandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" successfully" Oct 31 01:33:16.387230 env[1377]: time="2025-10-31T01:33:16.387208087Z" level=info msg="RemovePodSandbox \"68ea07f2cec36a0ea709039e67ed228650906cd3b6a583d6f78a2e7c9988da4d\" returns successfully" Oct 31 01:33:16.387663 env[1377]: time="2025-10-31T01:33:16.387643732Z" level=info msg="StopPodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\"" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.410 [WARNING][5078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vc6st-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68e0baab-eac1-409d-a79c-945bc83eb739", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6", Pod:"csi-node-driver-vc6st", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d49b778d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.410 [INFO][5078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.410 [INFO][5078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" iface="eth0" netns="" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.410 [INFO][5078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.410 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.423 [INFO][5085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.423 [INFO][5085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.423 [INFO][5085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.427 [WARNING][5085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.427 [INFO][5085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.428 [INFO][5085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:33:16.430721 env[1377]: 2025-10-31 01:33:16.429 [INFO][5078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.431062 env[1377]: time="2025-10-31T01:33:16.430737694Z" level=info msg="TearDown network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" successfully" Oct 31 01:33:16.431062 env[1377]: time="2025-10-31T01:33:16.430757723Z" level=info msg="StopPodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" returns successfully" Oct 31 01:33:16.431241 env[1377]: time="2025-10-31T01:33:16.431227872Z" level=info msg="RemovePodSandbox for \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\"" Oct 31 01:33:16.431321 env[1377]: time="2025-10-31T01:33:16.431297131Z" level=info msg="Forcibly stopping sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\"" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.452 [WARNING][5100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vc6st-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68e0baab-eac1-409d-a79c-945bc83eb739", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 32, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8823ccaeade7fd2f11fd189be65b605ea7ed472d7ff8242ef331dc88f04b6bf6", Pod:"csi-node-driver-vc6st", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5d49b778d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.453 [INFO][5100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.453 [INFO][5100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" iface="eth0" netns="" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.453 [INFO][5100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.453 [INFO][5100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.469 [INFO][5107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.469 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.470 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.474 [WARNING][5107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.474 [INFO][5107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" HandleID="k8s-pod-network.6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Workload="localhost-k8s-csi--node--driver--vc6st-eth0" Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.476 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:33:16.480526 env[1377]: 2025-10-31 01:33:16.479 [INFO][5100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8" Oct 31 01:33:16.480889 env[1377]: time="2025-10-31T01:33:16.480543562Z" level=info msg="TearDown network for sandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" successfully" Oct 31 01:33:16.483017 env[1377]: time="2025-10-31T01:33:16.482998319Z" level=info msg="RemovePodSandbox \"6d715b3156b7623086840544a9b2e8c95015db1e296dbe8cc2dc2e036b11cdb8\" returns successfully" Oct 31 01:33:16.695656 env[1377]: time="2025-10-31T01:33:16.694798477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:16.695908 env[1377]: time="2025-10-31T01:33:16.695519964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:33:16.695947 kubelet[2284]: E1031 01:33:16.695921 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:33:16.695983 kubelet[2284]: E1031 01:33:16.695953 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:33:16.696126 kubelet[2284]: E1031 01:33:16.696101 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:16.696282 env[1377]: time="2025-10-31T01:33:16.696267664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:16.697991 kubelet[2284]: E1031 01:33:16.697962 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:17.014745 env[1377]: time="2025-10-31T01:33:17.014387297Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:17.014901 env[1377]: time="2025-10-31T01:33:17.014875476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:17.015456 kubelet[2284]: E1031 01:33:17.015047 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:17.015456 kubelet[2284]: E1031 01:33:17.015085 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:17.015456 kubelet[2284]: E1031 01:33:17.015418 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzscg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-44rl6_calico-apiserver(03dcc52f-4acf-4546-b99b-cf4de4d54704): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:17.017110 kubelet[2284]: E1031 01:33:17.017079 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:18.571145 env[1377]: time="2025-10-31T01:33:18.571115965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:33:18.883776 env[1377]: time="2025-10-31T01:33:18.883681328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:18.884149 env[1377]: time="2025-10-31T01:33:18.884086504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:33:18.884342 kubelet[2284]: E1031 01:33:18.884299 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:33:18.884588 kubelet[2284]: E1031 
01:33:18.884351 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:33:18.884588 kubelet[2284]: E1031 01:33:18.884482 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cf7465748-bpws2_calico-system(c5ff4853-41a8-4d7e-a0bc-f8d8451a400b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:18.885876 kubelet[2284]: E1031 01:33:18.885849 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:19.571485 env[1377]: time="2025-10-31T01:33:19.571459041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:19.903839 env[1377]: time="2025-10-31T01:33:19.903736222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:19.904415 env[1377]: time="2025-10-31T01:33:19.904365680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:19.904700 kubelet[2284]: E1031 01:33:19.904676 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:19.904971 kubelet[2284]: E1031 01:33:19.904954 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:19.905161 kubelet[2284]: E1031 01:33:19.905130 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqf4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7495b6f49d-9bz8s_calico-apiserver(3586a90a-6636-45ab-8082-c6aa9bdb62e3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:19.907176 kubelet[2284]: E1031 01:33:19.907133 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:33:24.632491 kubelet[2284]: E1031 01:33:24.632464 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:25.754908 systemd[1]: run-containerd-runc-k8s.io-7c3f7ff64beedeb944cd56391b76480b6781f2d051e3cc7f588f6a4ce9585d21-runc.r9fX4o.mount: Deactivated successfully. 
Oct 31 01:33:26.571567 kubelet[2284]: E1031 01:33:26.571488 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:33:29.571550 kubelet[2284]: E1031 01:33:29.571521 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:30.571043 kubelet[2284]: E1031 01:33:30.571000 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:33:32.570967 kubelet[2284]: E1031 01:33:32.570941 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:32.571861 kubelet[2284]: E1031 01:33:32.571824 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:33.571030 kubelet[2284]: E1031 01:33:33.571002 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:33:37.570941 env[1377]: time="2025-10-31T01:33:37.570870068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:33:37.939942 env[1377]: time="2025-10-31T01:33:37.939842119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:37.948704 env[1377]: time="2025-10-31T01:33:37.948617900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:33:37.960527 kubelet[2284]: E1031 01:33:37.948888 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:33:37.960827 kubelet[2284]: E1031 01:33:37.960811 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:33:37.960975 kubelet[2284]: E1031 01:33:37.960956 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fca255853824dd5924def6ba75879e0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:37.967761 env[1377]: time="2025-10-31T01:33:37.962902527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:33:38.415181 env[1377]: time="2025-10-31T01:33:38.415134582Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:38.421206 env[1377]: time="2025-10-31T01:33:38.416092170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:33:38.421282 kubelet[2284]: E1031 01:33:38.416307 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:33:38.421282 kubelet[2284]: E1031 01:33:38.416340 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:33:38.421282 kubelet[2284]: E1031 01:33:38.416416 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:38.421282 kubelet[2284]: E1031 01:33:38.417657 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:33:40.571248 env[1377]: time="2025-10-31T01:33:40.571209769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:40.897203 env[1377]: time="2025-10-31T01:33:40.897099832Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:40.903705 env[1377]: time="2025-10-31T01:33:40.903675390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:40.903862 kubelet[2284]: E1031 01:33:40.903826 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:40.910600 kubelet[2284]: E1031 01:33:40.903869 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:40.910600 kubelet[2284]: E1031 01:33:40.903960 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b64v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-s9kdt_calico-apiserver(8f69b598-caa2-4abe-911a-df60fbb3c4df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:40.910600 kubelet[2284]: E1031 01:33:40.905231 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:41.571647 env[1377]: time="2025-10-31T01:33:41.571624272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:41.949636 env[1377]: 
time="2025-10-31T01:33:41.949530761Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:41.958079 env[1377]: time="2025-10-31T01:33:41.958031428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:41.958747 kubelet[2284]: E1031 01:33:41.958325 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:41.958747 kubelet[2284]: E1031 01:33:41.958360 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:41.958747 kubelet[2284]: E1031 01:33:41.958454 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzscg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-44rl6_calico-apiserver(03dcc52f-4acf-4546-b99b-cf4de4d54704): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:41.970185 kubelet[2284]: E1031 01:33:41.960258 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:43.571507 env[1377]: time="2025-10-31T01:33:43.571474879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:33:43.887426 env[1377]: time="2025-10-31T01:33:43.887341307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:43.902556 env[1377]: time="2025-10-31T01:33:43.902516832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:33:43.902853 kubelet[2284]: E1031 01:33:43.902832 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:33:43.903071 kubelet[2284]: E1031 01:33:43.903058 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:33:43.903317 kubelet[2284]: E1031 01:33:43.903296 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFr
omSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:43.903410 env[1377]: time="2025-10-31T01:33:43.903338500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:33:44.225311 env[1377]: time="2025-10-31T01:33:44.225228942Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:44.234315 env[1377]: time="2025-10-31T01:33:44.234281229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:33:44.247437 env[1377]: time="2025-10-31T01:33:44.234816034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:33:44.257420 kubelet[2284]: E1031 01:33:44.234554 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:33:44.257420 kubelet[2284]: E1031 01:33:44.234585 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:33:44.257420 kubelet[2284]: E1031 01:33:44.234797 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cf7465748-bpws2_calico-system(c5ff4853-41a8-4d7e-a0bc-f8d8451a400b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:44.257420 kubelet[2284]: E1031 01:33:44.236163 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:44.596379 env[1377]: time="2025-10-31T01:33:44.596345599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:44.596716 env[1377]: time="2025-10-31T01:33:44.596693259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:33:44.596874 kubelet[2284]: E1031 01:33:44.596837 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:33:44.597348 kubelet[2284]: E1031 01:33:44.596884 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:33:44.597348 kubelet[2284]: E1031 01:33:44.597186 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:44.597460 env[1377]: time="2025-10-31T01:33:44.597099445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:33:44.598743 kubelet[2284]: E1031 01:33:44.598721 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:44.929948 env[1377]: time="2025-10-31T01:33:44.929877113Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:44.935113 env[1377]: time="2025-10-31T01:33:44.935071127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:33:44.935269 kubelet[2284]: E1031 01:33:44.935236 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:44.935514 kubelet[2284]: E1031 01:33:44.935279 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:33:44.935514 kubelet[2284]: E1031 01:33:44.935386 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqf4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7495b6f49d-9bz8s_calico-apiserver(3586a90a-6636-45ab-8082-c6aa9bdb62e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:44.936758 kubelet[2284]: E1031 01:33:44.936740 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:33:45.280774 systemd[1]: Started sshd@7-139.178.70.102:22-147.75.109.163:54032.service. 
Oct 31 01:33:45.290166 kernel: kauditd_printk_skb: 35 callbacks suppressed Oct 31 01:33:45.291386 kernel: audit: type=1130 audit(1761874425.279:426): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-147.75.109.163:54032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:45.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-147.75.109.163:54032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:45.424344 sshd[5157]: Accepted publickey for core from 147.75.109.163 port 54032 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:33:45.422000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.434015 kernel: audit: type=1101 audit(1761874425.422:427): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.434073 kernel: audit: type=1103 audit(1761874425.426:428): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.439361 kernel: audit: type=1006 audit(1761874425.430:429): pid=5157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 31 01:33:45.439408 kernel: audit: type=1300 
audit(1761874425.430:429): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe801164a0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:45.439432 kernel: audit: type=1327 audit(1761874425.430:429): proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:45.426000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.430000 audit[5157]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe801164a0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:45.430000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:45.442129 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:33:45.473694 systemd[1]: Started session-10.scope. Oct 31 01:33:45.473842 systemd-logind[1342]: New session 10 of user core. 
Oct 31 01:33:45.476000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.480000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.485085 kernel: audit: type=1105 audit(1761874425.476:430): pid=5157 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.485144 kernel: audit: type=1103 audit(1761874425.480:431): pid=5160 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:45.572235 env[1377]: time="2025-10-31T01:33:45.572042027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:33:45.913681 env[1377]: time="2025-10-31T01:33:45.913595563Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:33:45.915300 env[1377]: time="2025-10-31T01:33:45.915277526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 
01:33:45.915493 kubelet[2284]: E1031 01:33:45.915469 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:33:45.915567 kubelet[2284]: E1031 01:33:45.915554 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:33:45.915731 kubelet[2284]: E1031 01:33:45.915702 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:n
il,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kdtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8nrww_calico-system(d10ebfbe-91f8-4576-8542-06b4d8a152be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:33:45.917129 kubelet[2284]: E1031 01:33:45.917113 2284 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:33:46.405828 sshd[5157]: pam_unix(sshd:session): session closed for user core Oct 31 01:33:46.405000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:46.407820 systemd[1]: sshd@7-139.178.70.102:22-147.75.109.163:54032.service: Deactivated successfully. Oct 31 01:33:46.408375 systemd[1]: session-10.scope: Deactivated successfully. 
Oct 31 01:33:46.414310 kernel: audit: type=1106 audit(1761874426.405:432): pid=5157 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:46.414967 kernel: audit: type=1104 audit(1761874426.405:433): pid=5157 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:46.405000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:46.410080 systemd-logind[1342]: Session 10 logged out. Waiting for processes to exit. Oct 31 01:33:46.410574 systemd-logind[1342]: Removed session 10. Oct 31 01:33:46.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-147.75.109.163:54032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:33:49.578876 kubelet[2284]: E1031 01:33:49.578849 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:33:51.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-147.75.109.163:57084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:51.421733 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:33:51.421782 kernel: audit: type=1130 audit(1761874431.411:435): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-147.75.109.163:57084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:51.413071 systemd[1]: Started sshd@8-139.178.70.102:22-147.75.109.163:57084.service. 
Oct 31 01:33:51.871000 audit[5176]: USER_ACCT pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.876105 sshd[5176]: Accepted publickey for core from 147.75.109.163 port 57084 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:33:51.892181 kernel: audit: type=1101 audit(1761874431.871:436): pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.892234 kernel: audit: type=1103 audit(1761874431.871:437): pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.892257 kernel: audit: type=1006 audit(1761874431.871:438): pid=5176 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Oct 31 01:33:51.892273 kernel: audit: type=1300 audit(1761874431.871:438): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca57fad40 a2=3 a3=0 items=0 ppid=1 pid=5176 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:51.892288 kernel: audit: type=1327 audit(1761874431.871:438): proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:51.871000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.871000 audit[5176]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca57fad40 a2=3 a3=0 items=0 ppid=1 pid=5176 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:51.871000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:51.885096 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:33:51.930457 systemd[1]: Started session-11.scope. Oct 31 01:33:51.930711 systemd-logind[1342]: New session 11 of user core. Oct 31 01:33:51.945325 kernel: audit: type=1105 audit(1761874431.936:439): pid=5176 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.945368 kernel: audit: type=1103 audit(1761874431.940:440): pid=5179 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.936000 audit[5176]: USER_START pid=5176 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:51.940000 audit[5179]: CRED_ACQ pid=5179 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 
01:33:52.851208 sshd[5176]: pam_unix(sshd:session): session closed for user core Oct 31 01:33:52.850000 audit[5176]: USER_END pid=5176 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:52.854000 audit[5176]: CRED_DISP pid=5176 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:52.863645 kernel: audit: type=1106 audit(1761874432.850:441): pid=5176 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:52.863690 kernel: audit: type=1104 audit(1761874432.854:442): pid=5176 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:52.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-147.75.109.163:57084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:52.859882 systemd[1]: sshd@8-139.178.70.102:22-147.75.109.163:57084.service: Deactivated successfully. Oct 31 01:33:52.860841 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 01:33:52.861084 systemd-logind[1342]: Session 11 logged out. Waiting for processes to exit. Oct 31 01:33:52.861543 systemd-logind[1342]: Removed session 11. 
Oct 31 01:33:54.571060 kubelet[2284]: E1031 01:33:54.571038 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:33:56.570859 kubelet[2284]: E1031 01:33:56.570833 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:33:56.594895 kubelet[2284]: E1031 01:33:56.594868 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:33:57.852156 systemd[1]: Started 
sshd@9-139.178.70.102:22-147.75.109.163:57100.service. Oct 31 01:33:57.854286 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:33:57.854329 kernel: audit: type=1130 audit(1761874437.851:444): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-147.75.109.163:57100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:57.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-147.75.109.163:57100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:57.977000 audit[5208]: USER_ACCT pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:57.980024 sshd[5208]: Accepted publickey for core from 147.75.109.163 port 57100 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:33:57.983678 kernel: audit: type=1101 audit(1761874437.977:445): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:57.982000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:57.988678 kernel: audit: type=1103 audit(1761874437.982:446): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:57.989592 sshd[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:33:57.992640 kernel: audit: type=1006 audit(1761874437.983:447): pid=5208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Oct 31 01:33:57.983000 audit[5208]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb6322de0 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:57.998710 kernel: audit: type=1300 audit(1761874437.983:447): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb6322de0 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:57.983000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:58.001567 systemd[1]: Started session-12.scope. Oct 31 01:33:58.002520 systemd-logind[1342]: New session 12 of user core. 
Oct 31 01:33:58.003658 kernel: audit: type=1327 audit(1761874437.983:447): proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:58.004000 audit[5208]: USER_START pid=5208 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.010727 kernel: audit: type=1105 audit(1761874438.004:448): pid=5208 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.006000 audit[5211]: CRED_ACQ pid=5211 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.014695 kernel: audit: type=1103 audit(1761874438.006:449): pid=5211 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-147.75.109.163:57104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:58.482254 systemd[1]: Started sshd@10-139.178.70.102:22-147.75.109.163:57104.service. 
Oct 31 01:33:58.488304 kernel: audit: type=1130 audit(1761874438.480:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-147.75.109.163:57104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:58.485986 sshd[5208]: pam_unix(sshd:session): session closed for user core Oct 31 01:33:58.494000 audit[5208]: USER_END pid=5208 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.500654 kernel: audit: type=1106 audit(1761874438.494:451): pid=5208 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.503000 audit[5208]: CRED_DISP pid=5208 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-147.75.109.163:57100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:58.506342 systemd[1]: sshd@9-139.178.70.102:22-147.75.109.163:57100.service: Deactivated successfully. Oct 31 01:33:58.506896 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 01:33:58.507987 systemd-logind[1342]: Session 12 logged out. Waiting for processes to exit. 
Oct 31 01:33:58.508596 systemd-logind[1342]: Removed session 12. Oct 31 01:33:58.537000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.538000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.538000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6844eee0 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:58.538000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:58.540576 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:33:58.549136 sshd[5219]: Accepted publickey for core from 147.75.109.163 port 57104 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:33:58.546100 systemd[1]: Started session-13.scope. Oct 31 01:33:58.547651 systemd-logind[1342]: New session 13 of user core. 
Oct 31 01:33:58.565000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.566000 audit[5224]: CRED_ACQ pid=5224 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.572143 kubelet[2284]: E1031 01:33:58.572091 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:33:58.914249 systemd[1]: Started sshd@11-139.178.70.102:22-147.75.109.163:57110.service. 
Oct 31 01:33:58.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.102:22-147.75.109.163:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:58.914000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.914000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.915536 sshd[5219]: pam_unix(sshd:session): session closed for user core Oct 31 01:33:58.917558 systemd[1]: sshd@10-139.178.70.102:22-147.75.109.163:57104.service: Deactivated successfully. Oct 31 01:33:58.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-147.75.109.163:57104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:58.918333 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 01:33:58.918381 systemd-logind[1342]: Session 13 logged out. Waiting for processes to exit. Oct 31 01:33:58.919309 systemd-logind[1342]: Removed session 13. 
Oct 31 01:33:58.976591 sshd[5229]: Accepted publickey for core from 147.75.109.163 port 57110 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:33:58.974000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.978906 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:33:58.976000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.976000 audit[5229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd94dbc7e0 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:33:58.976000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:33:58.983042 systemd-logind[1342]: New session 14 of user core. Oct 31 01:33:58.983391 systemd[1]: Started session-14.scope. 
Oct 31 01:33:58.990000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:58.991000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:59.252273 sshd[5229]: pam_unix(sshd:session): session closed for user core Oct 31 01:33:59.252000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:59.252000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:33:59.254789 systemd[1]: sshd@11-139.178.70.102:22-147.75.109.163:57110.service: Deactivated successfully. Oct 31 01:33:59.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.102:22-147.75.109.163:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:33:59.255687 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 01:33:59.256069 systemd-logind[1342]: Session 14 logged out. Waiting for processes to exit. Oct 31 01:33:59.256748 systemd-logind[1342]: Removed session 14. 
Oct 31 01:33:59.571409 kubelet[2284]: E1031 01:33:59.571259 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:33:59.571409 kubelet[2284]: E1031 01:33:59.571363 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:34:01.572991 kubelet[2284]: E1031 01:34:01.572962 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:34:04.250428 systemd[1]: Started sshd@12-139.178.70.102:22-147.75.109.163:43458.service. Oct 31 01:34:04.254686 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 31 01:34:04.254734 kernel: audit: type=1130 audit(1761874444.248:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-147.75.109.163:43458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:04.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-147.75.109.163:43458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:34:04.417000 audit[5248]: USER_ACCT pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.422666 kernel: audit: type=1101 audit(1761874444.417:472): pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.422707 sshd[5248]: Accepted publickey for core from 147.75.109.163 port 43458 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:04.421000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.424860 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:04.427878 kernel: audit: type=1103 audit(1761874444.421:473): pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.428735 kernel: audit: type=1006 audit(1761874444.421:474): pid=5248 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Oct 31 01:34:04.428759 kernel: audit: type=1300 audit(1761874444.421:474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba33fd20 a2=3 a3=0 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:04.421000 audit[5248]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba33fd20 a2=3 a3=0 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:04.421000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:04.433639 kernel: audit: type=1327 audit(1761874444.421:474): proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:04.434088 systemd-logind[1342]: New session 15 of user core. Oct 31 01:34:04.434402 systemd[1]: Started session-15.scope. Oct 31 01:34:04.445660 kernel: audit: type=1105 audit(1761874444.440:475): pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.440000 audit[5248]: USER_START pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.444000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.449713 kernel: audit: type=1103 audit(1761874444.444:476): pid=5251 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Oct 31 01:34:04.741823 sshd[5248]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:04.741000 audit[5248]: USER_END pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.747598 kernel: audit: type=1106 audit(1761874444.741:477): pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.746687 systemd-logind[1342]: Session 15 logged out. Waiting for processes to exit. Oct 31 01:34:04.746849 systemd[1]: sshd@12-139.178.70.102:22-147.75.109.163:43458.service: Deactivated successfully. Oct 31 01:34:04.747344 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 01:34:04.747662 systemd-logind[1342]: Removed session 15. Oct 31 01:34:04.742000 audit[5248]: CRED_DISP pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.751625 kernel: audit: type=1104 audit(1761874444.742:478): pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:04.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-147.75.109.163:43458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 01:34:06.570795 kubelet[2284]: E1031 01:34:06.570770 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:34:08.571218 kubelet[2284]: E1031 01:34:08.571185 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:34:08.571601 kubelet[2284]: E1031 01:34:08.571392 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 
01:34:09.737924 systemd[1]: Started sshd@13-139.178.70.102:22-147.75.109.163:43464.service. Oct 31 01:34:09.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-147.75.109.163:43464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:09.739597 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:34:09.739663 kernel: audit: type=1130 audit(1761874449.736:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-147.75.109.163:43464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:09.907000 audit[5261]: USER_ACCT pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.912832 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 43464 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:09.916859 kernel: audit: type=1101 audit(1761874449.907:481): pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.916891 kernel: audit: type=1103 audit(1761874449.912:482): pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.916909 kernel: audit: type=1006 audit(1761874449.912:483): pid=5261 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 
auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Oct 31 01:34:09.925656 kernel: audit: type=1300 audit(1761874449.912:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe727455b0 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:09.925692 kernel: audit: type=1327 audit(1761874449.912:483): proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:09.912000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.912000 audit[5261]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe727455b0 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:09.912000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:09.917139 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:09.925043 systemd-logind[1342]: New session 16 of user core. Oct 31 01:34:09.925485 systemd[1]: Started session-16.scope. 
Oct 31 01:34:09.927000 audit[5261]: USER_START pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.927000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.935465 kernel: audit: type=1105 audit(1761874449.927:484): pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:09.935548 kernel: audit: type=1103 audit(1761874449.927:485): pid=5264 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:10.045730 sshd[5261]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:10.045000 audit[5261]: USER_END pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:10.054111 kernel: audit: type=1106 audit(1761874450.045:486): pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:10.054147 kernel: audit: type=1104 audit(1761874450.049:487): pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:10.049000 audit[5261]: CRED_DISP pid=5261 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:10.054446 systemd[1]: sshd@13-139.178.70.102:22-147.75.109.163:43464.service: Deactivated successfully. Oct 31 01:34:10.055137 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 01:34:10.055173 systemd-logind[1342]: Session 16 logged out. Waiting for processes to exit. Oct 31 01:34:10.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-147.75.109.163:43464 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:10.055816 systemd-logind[1342]: Removed session 16. 
Oct 31 01:34:11.570803 kubelet[2284]: E1031 01:34:11.570779 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:34:12.571889 kubelet[2284]: E1031 01:34:12.571861 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:34:12.572240 kubelet[2284]: E1031 01:34:12.572221 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:34:14.570359 kubelet[2284]: E1031 01:34:14.570336 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:34:15.048838 systemd[1]: Started sshd@14-139.178.70.102:22-147.75.109.163:39508.service. Oct 31 01:34:15.053597 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:34:15.054173 kernel: audit: type=1130 audit(1761874455.047:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-147.75.109.163:39508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:34:15.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-147.75.109.163:39508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:15.204092 sshd[5273]: Accepted publickey for core from 147.75.109.163 port 39508 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:15.202000 audit[5273]: USER_ACCT pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.212281 kernel: audit: type=1101 audit(1761874455.202:490): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.212380 kernel: audit: type=1103 audit(1761874455.203:491): pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.203000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.212774 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:15.217083 kernel: audit: type=1006 audit(1761874455.203:492): pid=5273 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Oct 31 
01:34:15.217514 systemd-logind[1342]: New session 17 of user core. Oct 31 01:34:15.217811 systemd[1]: Started session-17.scope. Oct 31 01:34:15.203000 audit[5273]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5afd3900 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:15.225659 kernel: audit: type=1300 audit(1761874455.203:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5afd3900 a2=3 a3=0 items=0 ppid=1 pid=5273 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:15.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:15.230651 kernel: audit: type=1327 audit(1761874455.203:492): proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:15.219000 audit[5273]: USER_START pid=5273 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.235657 kernel: audit: type=1105 audit(1761874455.219:493): pid=5273 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.224000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.241720 kernel: audit: 
type=1103 audit(1761874455.224:494): pid=5276 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.509279 sshd[5273]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:15.508000 audit[5273]: USER_END pid=5273 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.510000 audit[5273]: CRED_DISP pid=5273 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.517147 kernel: audit: type=1106 audit(1761874455.508:495): pid=5273 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.517242 kernel: audit: type=1104 audit(1761874455.510:496): pid=5273 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:15.517353 systemd[1]: sshd@14-139.178.70.102:22-147.75.109.163:39508.service: Deactivated successfully. Oct 31 01:34:15.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-147.75.109.163:39508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 31 01:34:15.518112 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 01:34:15.518132 systemd-logind[1342]: Session 17 logged out. Waiting for processes to exit. Oct 31 01:34:15.519060 systemd-logind[1342]: Removed session 17. Oct 31 01:34:20.511755 systemd[1]: Started sshd@15-139.178.70.102:22-147.75.109.163:43006.service. Oct 31 01:34:20.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-147.75.109.163:43006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:20.519552 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:34:20.519631 kernel: audit: type=1130 audit(1761874460.510:498): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-147.75.109.163:43006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:34:20.570623 kubelet[2284]: E1031 01:34:20.570576 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:34:20.705291 sshd[5293]: Accepted publickey for core from 147.75.109.163 port 43006 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:20.703000 audit[5293]: USER_ACCT pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.704000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.706295 sshd[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:20.717571 kernel: audit: type=1101 audit(1761874460.703:499): pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.717654 kernel: audit: type=1103 audit(1761874460.704:500): pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.717676 kernel: audit: type=1006 audit(1761874460.704:501): pid=5293 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Oct 31 01:34:20.717697 kernel: audit: type=1300 audit(1761874460.704:501): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefd0d5e80 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:20.717713 kernel: audit: type=1327 audit(1761874460.704:501): proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:20.704000 audit[5293]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefd0d5e80 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:20.704000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:20.721413 systemd[1]: Started session-18.scope. 
Oct 31 01:34:20.756090 kernel: audit: type=1105 audit(1761874460.723:502): pid=5293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.756149 kernel: audit: type=1103 audit(1761874460.724:503): pid=5300 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.723000 audit[5293]: USER_START pid=5293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.724000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:20.721737 systemd-logind[1342]: New session 18 of user core. Oct 31 01:34:21.122226 systemd[1]: Started sshd@16-139.178.70.102:22-147.75.109.163:43010.service. Oct 31 01:34:21.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-147.75.109.163:43010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:21.125626 kernel: audit: type=1130 audit(1761874461.120:504): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-147.75.109.163:43010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 31 01:34:21.125858 sshd[5293]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:21.124000 audit[5293]: USER_END pid=5293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.130677 kernel: audit: type=1106 audit(1761874461.124:505): pid=5293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.129000 audit[5293]: CRED_DISP pid=5293 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-147.75.109.163:43006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:21.131245 systemd[1]: sshd@15-139.178.70.102:22-147.75.109.163:43006.service: Deactivated successfully. Oct 31 01:34:21.131837 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 01:34:21.132468 systemd-logind[1342]: Session 18 logged out. Waiting for processes to exit. Oct 31 01:34:21.132950 systemd-logind[1342]: Removed session 18. 
Oct 31 01:34:21.154000 audit[5307]: USER_ACCT pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.156438 sshd[5307]: Accepted publickey for core from 147.75.109.163 port 43010 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:21.155000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.155000 audit[5307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3f9b2ef0 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:21.155000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:21.160369 systemd[1]: Started session-19.scope. Oct 31 01:34:21.157374 sshd[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:21.160494 systemd-logind[1342]: New session 19 of user core. 
Oct 31 01:34:21.162000 audit[5307]: USER_START pid=5307 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.163000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.758181 systemd[1]: Started sshd@17-139.178.70.102:22-147.75.109.163:43014.service. Oct 31 01:34:21.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.102:22-147.75.109.163:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:21.763217 sshd[5307]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:21.762000 audit[5307]: USER_END pid=5307 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.762000 audit[5307]: CRED_DISP pid=5307 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-147.75.109.163:43010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:34:21.772409 systemd[1]: sshd@16-139.178.70.102:22-147.75.109.163:43010.service: Deactivated successfully. Oct 31 01:34:21.772999 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 01:34:21.773848 systemd-logind[1342]: Session 19 logged out. Waiting for processes to exit. Oct 31 01:34:21.774402 systemd-logind[1342]: Removed session 19. Oct 31 01:34:21.819000 audit[5318]: USER_ACCT pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.821471 sshd[5318]: Accepted publickey for core from 147.75.109.163 port 43014 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:21.825000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.825000 audit[5318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4b732000 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:21.825000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:21.828791 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:21.834849 systemd[1]: Started session-20.scope. Oct 31 01:34:21.835200 systemd-logind[1342]: New session 20 of user core. 
Oct 31 01:34:21.838000 audit[5318]: USER_START pid=5318 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:21.839000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.476498 systemd[1]: Started sshd@18-139.178.70.102:22-147.75.109.163:43028.service. Oct 31 01:34:22.477361 sshd[5318]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:22.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.102:22-147.75.109.163:43028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:22.481000 audit[5318]: USER_END pid=5318 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.483000 audit[5318]: CRED_DISP pid=5318 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.486244 systemd[1]: sshd@17-139.178.70.102:22-147.75.109.163:43014.service: Deactivated successfully. 
Oct 31 01:34:22.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.102:22-147.75.109.163:43014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:22.487497 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 01:34:22.487535 systemd-logind[1342]: Session 20 logged out. Waiting for processes to exit. Oct 31 01:34:22.490281 systemd-logind[1342]: Removed session 20. Oct 31 01:34:22.552000 audit[5332]: USER_ACCT pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.553851 sshd[5332]: Accepted publickey for core from 147.75.109.163 port 43028 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:22.553000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.553000 audit[5332]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0a965a20 a2=3 a3=0 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:22.553000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:22.555759 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:22.559374 systemd[1]: Started session-21.scope. Oct 31 01:34:22.560046 systemd-logind[1342]: New session 21 of user core. 
Oct 31 01:34:22.561000 audit[5332]: USER_START pid=5332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.562000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:22.572647 env[1377]: time="2025-10-31T01:34:22.572495318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:34:22.599000 audit[5339]: NETFILTER_CFG table=filter:130 family=2 entries=26 op=nft_register_rule pid=5339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:22.599000 audit[5339]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff60d2a000 a2=0 a3=7fff60d29fec items=0 ppid=2387 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:22.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:22.604000 audit[5339]: NETFILTER_CFG table=nat:131 family=2 entries=20 op=nft_register_rule pid=5339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:22.604000 audit[5339]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff60d2a000 a2=0 a3=0 items=0 ppid=2387 pid=5339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:22.604000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:22.615000 audit[5342]: NETFILTER_CFG table=filter:132 family=2 entries=38 op=nft_register_rule pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:22.615000 audit[5342]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd269e8ef0 a2=0 a3=7ffd269e8edc items=0 ppid=2387 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:22.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:22.619000 audit[5342]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:22.619000 audit[5342]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd269e8ef0 a2=0 a3=0 items=0 ppid=2387 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:22.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:23.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.102:22-147.75.109.163:43030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:23.055244 sshd[5332]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:23.054667 systemd[1]: Started sshd@19-139.178.70.102:22-147.75.109.163:43030.service. 
Oct 31 01:34:23.058000 audit[5332]: USER_END pid=5332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.059000 audit[5332]: CRED_DISP pid=5332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.102:22-147.75.109.163:43028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:23.061868 systemd[1]: sshd@18-139.178.70.102:22-147.75.109.163:43028.service: Deactivated successfully. Oct 31 01:34:23.063243 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 01:34:23.063406 systemd-logind[1342]: Session 21 logged out. Waiting for processes to exit. Oct 31 01:34:23.064471 systemd-logind[1342]: Removed session 21. 
Oct 31 01:34:23.068496 env[1377]: time="2025-10-31T01:34:23.068270373Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:23.069628 env[1377]: time="2025-10-31T01:34:23.068853831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:34:23.074442 kubelet[2284]: E1031 01:34:23.071889 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:34:23.077758 kubelet[2284]: E1031 01:34:23.077738 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:34:23.082668 kubelet[2284]: E1031 01:34:23.082632 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzscg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-44rl6_calico-apiserver(03dcc52f-4acf-4546-b99b-cf4de4d54704): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:23.084672 kubelet[2284]: E1031 01:34:23.084651 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:34:23.110000 audit[5347]: USER_ACCT pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.111000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.111000 audit[5347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc171f230 a2=3 a3=0 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:23.111000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:23.112990 sshd[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:23.114817 sshd[5347]: Accepted publickey for core from 147.75.109.163 port 43030 ssh2: RSA 
SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:23.116323 systemd[1]: Started session-22.scope. Oct 31 01:34:23.116523 systemd-logind[1342]: New session 22 of user core. Oct 31 01:34:23.118000 audit[5347]: USER_START pid=5347 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.119000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.288184 sshd[5347]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:23.287000 audit[5347]: USER_END pid=5347 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.287000 audit[5347]: CRED_DISP pid=5347 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:23.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.102:22-147.75.109.163:43030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:23.290729 systemd[1]: sshd@19-139.178.70.102:22-147.75.109.163:43030.service: Deactivated successfully. Oct 31 01:34:23.291672 systemd[1]: session-22.scope: Deactivated successfully. 
Oct 31 01:34:23.292015 systemd-logind[1342]: Session 22 logged out. Waiting for processes to exit. Oct 31 01:34:23.292820 systemd-logind[1342]: Removed session 22. Oct 31 01:34:23.570858 kubelet[2284]: E1031 01:34:23.570755 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:34:23.571436 env[1377]: time="2025-10-31T01:34:23.571106481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:34:23.892519 env[1377]: time="2025-10-31T01:34:23.892305440Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:23.898268 env[1377]: time="2025-10-31T01:34:23.898182859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:34:23.898387 kubelet[2284]: E1031 01:34:23.898350 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:34:23.898438 kubelet[2284]: E1031 01:34:23.898395 2284 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:34:23.898498 kubelet[2284]: E1031 01:34:23.898475 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7fca255853824dd5924def6ba75879e0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:23.900520 env[1377]: time="2025-10-31T01:34:23.900338394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:34:24.236508 env[1377]: time="2025-10-31T01:34:24.236396366Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:24.237100 env[1377]: time="2025-10-31T01:34:24.237045868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:34:24.237339 kubelet[2284]: E1031 01:34:24.237301 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:34:24.237535 kubelet[2284]: E1031 01:34:24.237348 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:34:24.237535 kubelet[2284]: E1031 01:34:24.237433 
2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvt5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84f98497cf-b6pmv_calico-system(034a3d4f-f436-4259-8570-9d57a6e1d274): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:24.238772 kubelet[2284]: E1031 01:34:24.238745 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:34:25.702777 env[1377]: time="2025-10-31T01:34:25.702498917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:34:26.063914 env[1377]: time="2025-10-31T01:34:26.063824914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:26.064434 env[1377]: time="2025-10-31T01:34:26.064400028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:34:26.065389 kubelet[2284]: E1031 01:34:26.064717 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:34:26.065389 kubelet[2284]: E1031 01:34:26.064757 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:34:26.065389 kubelet[2284]: E1031 01:34:26.064930 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqf4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7495b6f49d-9bz8s_calico-apiserver(3586a90a-6636-45ab-8082-c6aa9bdb62e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:26.065821 env[1377]: time="2025-10-31T01:34:26.065427263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:34:26.066549 kubelet[2284]: E1031 01:34:26.066512 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:34:26.401069 env[1377]: time="2025-10-31T01:34:26.401028696Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:26.401538 env[1377]: time="2025-10-31T01:34:26.401495341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:34:26.401764 kubelet[2284]: E1031 01:34:26.401733 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:34:26.401836 kubelet[2284]: E1031 01:34:26.401772 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:34:26.404586 kubelet[2284]: E1031 01:34:26.404537 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:26.406840 env[1377]: time="2025-10-31T01:34:26.406811374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:34:26.739139 env[1377]: time="2025-10-31T01:34:26.739033285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:26.739848 env[1377]: time="2025-10-31T01:34:26.739814347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:34:26.740111 kubelet[2284]: E1031 01:34:26.740076 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:34:26.740210 kubelet[2284]: E1031 01:34:26.740195 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:34:26.740759 kubelet[2284]: E1031 01:34:26.740493 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-747rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vc6st_calico-system(68e0baab-eac1-409d-a79c-945bc83eb739): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:26.742098 kubelet[2284]: E1031 01:34:26.742056 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:34:27.348623 kernel: kauditd_printk_skb: 57 callbacks suppressed Oct 31 01:34:27.355184 kernel: audit: type=1325 audit(1761874467.344:547): table=filter:134 family=2 entries=26 op=nft_register_rule pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:27.356110 kernel: audit: type=1300 audit(1761874467.344:547): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc80a3e90 a2=0 a3=7fffc80a3e7c items=0 ppid=2387 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:27.356141 kernel: audit: type=1327 audit(1761874467.344:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:27.344000 audit[5385]: NETFILTER_CFG table=filter:134 family=2 entries=26 op=nft_register_rule 
pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:27.344000 audit[5385]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc80a3e90 a2=0 a3=7fffc80a3e7c items=0 ppid=2387 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:27.367338 kernel: audit: type=1325 audit(1761874467.354:548): table=nat:135 family=2 entries=104 op=nft_register_chain pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:27.372468 kernel: audit: type=1300 audit(1761874467.354:548): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffc80a3e90 a2=0 a3=7fffc80a3e7c items=0 ppid=2387 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:27.372500 kernel: audit: type=1327 audit(1761874467.354:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:27.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:27.354000 audit[5385]: NETFILTER_CFG table=nat:135 family=2 entries=104 op=nft_register_chain pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:34:27.354000 audit[5385]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffc80a3e90 a2=0 a3=7fffc80a3e7c items=0 ppid=2387 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:27.354000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:34:28.290699 systemd[1]: Started sshd@20-139.178.70.102:22-147.75.109.163:43040.service. Oct 31 01:34:28.294700 kernel: audit: type=1130 audit(1761874468.289:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-147.75.109.163:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:28.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-147.75.109.163:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:28.441000 audit[5386]: USER_ACCT pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:28.441000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:28.447837 sshd[5386]: Accepted publickey for core from 147.75.109.163 port 43040 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:28.450518 kernel: audit: type=1101 audit(1761874468.441:550): pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:28.452651 kernel: audit: type=1103 audit(1761874468.441:551): pid=5386 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:28.452949 kernel: audit: type=1006 audit(1761874468.441:552): pid=5386 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Oct 31 01:34:28.441000 audit[5386]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc13f885f0 a2=3 a3=0 items=0 ppid=1 pid=5386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:28.441000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:28.453262 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:28.458272 systemd-logind[1342]: New session 23 of user core. Oct 31 01:34:28.458650 systemd[1]: Started session-23.scope. 
Oct 31 01:34:28.460000 audit[5386]: USER_START pid=5386 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:28.461000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:29.089352 sshd[5386]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:29.089000 audit[5386]: USER_END pid=5386 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:29.090000 audit[5386]: CRED_DISP pid=5386 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:29.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-147.75.109.163:43040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:29.094012 systemd[1]: sshd@20-139.178.70.102:22-147.75.109.163:43040.service: Deactivated successfully. Oct 31 01:34:29.094760 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 01:34:29.094784 systemd-logind[1342]: Session 23 logged out. Waiting for processes to exit. Oct 31 01:34:29.095969 systemd-logind[1342]: Removed session 23. 
Oct 31 01:34:29.574897 env[1377]: time="2025-10-31T01:34:29.574469977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:34:29.910143 env[1377]: time="2025-10-31T01:34:29.909768976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:29.910731 env[1377]: time="2025-10-31T01:34:29.910638916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:34:29.910921 kubelet[2284]: E1031 01:34:29.910890 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:34:29.911190 kubelet[2284]: E1031 01:34:29.911176 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:34:29.911431 kubelet[2284]: E1031 01:34:29.911397 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kdtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8nrww_calico-system(d10ebfbe-91f8-4576-8542-06b4d8a152be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:29.912987 kubelet[2284]: E1031 01:34:29.912953 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be" Oct 31 01:34:32.571303 env[1377]: time="2025-10-31T01:34:32.571274199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:34:32.892246 env[1377]: time="2025-10-31T01:34:32.892159554Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 01:34:32.896257 env[1377]: time="2025-10-31T01:34:32.896209189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:34:32.896527 kubelet[2284]: E1031 01:34:32.896486 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:34:32.896849 kubelet[2284]: E1031 01:34:32.896833 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:34:32.897039 kubelet[2284]: E1031 01:34:32.896996 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5b64v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59f47b46b-s9kdt_calico-apiserver(8f69b598-caa2-4abe-911a-df60fbb3c4df): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:32.899254 kubelet[2284]: E1031 01:34:32.899216 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-s9kdt" podUID="8f69b598-caa2-4abe-911a-df60fbb3c4df" Oct 31 01:34:34.088093 systemd[1]: Started sshd@21-139.178.70.102:22-147.75.109.163:33568.service. Oct 31 01:34:34.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-147.75.109.163:33568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:34.093086 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 31 01:34:34.093132 kernel: audit: type=1130 audit(1761874474.086:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-147.75.109.163:33568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:34:34.131000 audit[5420]: USER_ACCT pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.133209 sshd[5420]: Accepted publickey for core from 147.75.109.163 port 33568 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:34.136689 kernel: audit: type=1101 audit(1761874474.131:559): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.136000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.160671 kernel: audit: type=1103 audit(1761874474.136:560): pid=5420 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.160718 kernel: audit: type=1006 audit(1761874474.140:561): pid=5420 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Oct 31 01:34:34.160743 kernel: audit: type=1300 audit(1761874474.140:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc412e53f0 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:34.160761 kernel: audit: type=1327 
audit(1761874474.140:561): proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:34.160777 kernel: audit: type=1105 audit(1761874474.154:562): pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.140000 audit[5420]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc412e53f0 a2=3 a3=0 items=0 ppid=1 pid=5420 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:34.140000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:34.154000 audit[5420]: USER_START pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.173681 kernel: audit: type=1103 audit(1761874474.159:563): pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.159000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.142215 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:34.152441 systemd-logind[1342]: New session 24 of user core. Oct 31 01:34:34.152919 systemd[1]: Started session-24.scope. 
Oct 31 01:34:34.479017 sshd[5420]: pam_unix(sshd:session): session closed for user core Oct 31 01:34:34.485667 kernel: audit: type=1106 audit(1761874474.478:564): pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.478000 audit[5420]: USER_END pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.484567 systemd[1]: sshd@21-139.178.70.102:22-147.75.109.163:33568.service: Deactivated successfully. Oct 31 01:34:34.485315 systemd-logind[1342]: Session 24 logged out. Waiting for processes to exit. Oct 31 01:34:34.490141 kernel: audit: type=1104 audit(1761874474.478:565): pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.478000 audit[5420]: CRED_DISP pid=5420 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:34.485388 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 01:34:34.489362 systemd-logind[1342]: Removed session 24. Oct 31 01:34:34.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-147.75.109.163:33568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:34:35.572033 env[1377]: time="2025-10-31T01:34:35.571809310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:34:35.929792 env[1377]: time="2025-10-31T01:34:35.929701652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:34:35.930252 env[1377]: time="2025-10-31T01:34:35.930214537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:34:35.930399 kubelet[2284]: E1031 01:34:35.930368 2284 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:34:35.930646 kubelet[2284]: E1031 01:34:35.930633 2284 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:34:35.930824 kubelet[2284]: E1031 01:34:35.930796 2284 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9l7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6cf7465748-bpws2_calico-system(c5ff4853-41a8-4d7e-a0bc-f8d8451a400b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:34:35.932156 kubelet[2284]: E1031 01:34:35.932127 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf7465748-bpws2" podUID="c5ff4853-41a8-4d7e-a0bc-f8d8451a400b" Oct 31 01:34:36.591343 kubelet[2284]: E1031 01:34:36.591309 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7495b6f49d-9bz8s" podUID="3586a90a-6636-45ab-8082-c6aa9bdb62e3" Oct 31 01:34:37.571410 kubelet[2284]: E1031 01:34:37.571381 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59f47b46b-44rl6" podUID="03dcc52f-4acf-4546-b99b-cf4de4d54704" Oct 31 01:34:37.571711 kubelet[2284]: E1031 01:34:37.571509 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84f98497cf-b6pmv" podUID="034a3d4f-f436-4259-8570-9d57a6e1d274" Oct 31 01:34:39.481580 systemd[1]: Started sshd@22-139.178.70.102:22-147.75.109.163:33582.service. Oct 31 01:34:39.485601 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:34:39.485668 kernel: audit: type=1130 audit(1761874479.480:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-147.75.109.163:33582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:39.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-147.75.109.163:33582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:39.571113 kubelet[2284]: E1031 01:34:39.571079 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vc6st" podUID="68e0baab-eac1-409d-a79c-945bc83eb739" Oct 31 01:34:39.767905 sshd[5432]: 
Accepted publickey for core from 147.75.109.163 port 33582 ssh2: RSA SHA256:5EJg7y3pv7Ht00E4mNOtVA/MuREJjzW0PuLgUWDrRw0 Oct 31 01:34:39.766000 audit[5432]: USER_ACCT pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.770000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.774468 sshd[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:34:39.775522 kernel: audit: type=1101 audit(1761874479.766:568): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.775555 kernel: audit: type=1103 audit(1761874479.770:569): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.775828 kernel: audit: type=1006 audit(1761874479.770:570): pid=5432 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Oct 31 01:34:39.770000 audit[5432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff68a84480 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:39.782133 
kernel: audit: type=1300 audit(1761874479.770:570): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff68a84480 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:34:39.784142 systemd-logind[1342]: New session 25 of user core. Oct 31 01:34:39.784796 systemd[1]: Started session-25.scope. Oct 31 01:34:39.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:39.786652 kernel: audit: type=1327 audit(1761874479.770:570): proctitle=737368643A20636F7265205B707269765D Oct 31 01:34:39.793000 audit[5432]: USER_START pid=5432 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.797000 audit[5435]: CRED_ACQ pid=5435 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.802554 kernel: audit: type=1105 audit(1761874479.793:571): pid=5432 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:39.802658 kernel: audit: type=1103 audit(1761874479.797:572): pid=5435 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:41.218477 sshd[5432]: 
pam_unix(sshd:session): session closed for user core Oct 31 01:34:41.235860 kernel: audit: type=1106 audit(1761874481.218:573): pid=5432 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:41.235920 kernel: audit: type=1104 audit(1761874481.218:574): pid=5432 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:41.218000 audit[5432]: USER_END pid=5432 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:41.218000 audit[5432]: CRED_DISP pid=5432 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 31 01:34:41.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-147.75.109.163:33582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:34:41.224501 systemd[1]: sshd@22-139.178.70.102:22-147.75.109.163:33582.service: Deactivated successfully. Oct 31 01:34:41.228384 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 01:34:41.228396 systemd-logind[1342]: Session 25 logged out. Waiting for processes to exit. Oct 31 01:34:41.229344 systemd-logind[1342]: Removed session 25. 
Oct 31 01:34:44.570840 kubelet[2284]: E1031 01:34:44.570804 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8nrww" podUID="d10ebfbe-91f8-4576-8542-06b4d8a152be"