Nov 8 00:26:28.738774 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:26:28.738789 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:26:28.738795 kernel: Disabled fast string operations
Nov 8 00:26:28.738799 kernel: BIOS-provided physical RAM map:
Nov 8 00:26:28.738803 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Nov 8 00:26:28.738806 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Nov 8 00:26:28.738812 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Nov 8 00:26:28.738816 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Nov 8 00:26:28.738820 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Nov 8 00:26:28.738824 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Nov 8 00:26:28.738828 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Nov 8 00:26:28.738832 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Nov 8 00:26:28.738836 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Nov 8 00:26:28.738840 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 8 00:26:28.738846 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Nov 8 00:26:28.738851 kernel: NX (Execute Disable) protection: active
Nov 8 00:26:28.738856 kernel: APIC: Static calls initialized
Nov 8 00:26:28.738860 kernel: SMBIOS 2.7 present.
Nov 8 00:26:28.738865 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
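
The BIOS-e820 entries above are the firmware's map of physical memory; everything the kernel can use must come from the "usable" ranges. A minimal sketch (Python, hypothetical helper, assuming a dmesg capture in log_text) that totals the usable ranges from lines in this format:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w[\w ]*)")

    def usable_bytes(log_text):
        # Sum the sizes of every range the firmware marked "usable";
        # the [start-end] ranges are inclusive, hence the +1.
        total = 0
        for start, end, kind in E820_RE.findall(log_text):
            if kind.strip() == "usable":
                total += int(end, 16) - int(start, 16) + 1
        return total

For the map above this comes to just under 2 GiB, consistent with the "Memory: 1936368K/2096628K available" line later in the log.
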
Nov 8 00:26:28.738869 kernel: vmware: hypercall mode: 0x00
Nov 8 00:26:28.738874 kernel: Hypervisor detected: VMware
Nov 8 00:26:28.738878 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Nov 8 00:26:28.738884 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Nov 8 00:26:28.738888 kernel: vmware: using clock offset of 2514331275 ns
Nov 8 00:26:28.738893 kernel: tsc: Detected 3408.000 MHz processor
Nov 8 00:26:28.738898 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:26:28.738903 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:26:28.738908 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Nov 8 00:26:28.738912 kernel: total RAM covered: 3072M
Nov 8 00:26:28.738917 kernel: Found optimal setting for mtrr clean up
Nov 8 00:26:28.738922 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Nov 8 00:26:28.738928 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Nov 8 00:26:28.738933 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:26:28.738937 kernel: Using GB pages for direct mapping
Nov 8 00:26:28.738942 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:26:28.738947 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Nov 8 00:26:28.738951 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Nov 8 00:26:28.738956 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Nov 8 00:26:28.738961 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Nov 8 00:26:28.738965 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 8 00:26:28.738972 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Nov 8 00:26:28.738977 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Nov 8 00:26:28.738982 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Nov 8 00:26:28.738987 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Nov 8 00:26:28.738992 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Nov 8 00:26:28.738998 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Nov 8 00:26:28.739003 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Nov 8 00:26:28.739008 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Nov 8 00:26:28.739013 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Nov 8 00:26:28.739017 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 8 00:26:28.739022 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Nov 8 00:26:28.739027 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Nov 8 00:26:28.739032 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Nov 8 00:26:28.739037 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Nov 8 00:26:28.739042 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Nov 8 00:26:28.739048 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Nov 8 00:26:28.739052 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Nov 8 00:26:28.739057 kernel: system APIC only can use physical flat
Nov 8 00:26:28.739062 kernel: APIC: Switched APIC routing to: physical flat
Nov 8 00:26:28.739067 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:26:28.739072 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 8 00:26:28.739077 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 8 00:26:28.739081 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 8 00:26:28.739086 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 8 00:26:28.739092 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 8 00:26:28.739097 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 8 00:26:28.739102 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 8 00:26:28.739107 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Nov 8 00:26:28.739111 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Nov 8 00:26:28.739116 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Nov 8 00:26:28.739121 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Nov 8 00:26:28.739126 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Nov 8 00:26:28.739130 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Nov 8 00:26:28.739135 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Nov 8 00:26:28.739141 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Nov 8 00:26:28.739146 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Nov 8 00:26:28.739150 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Nov 8 00:26:28.739155 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Nov 8 00:26:28.739160 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Nov 8 00:26:28.739165 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Nov 8 00:26:28.739170 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Nov 8 00:26:28.739174 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Nov 8 00:26:28.739179 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Nov 8 00:26:28.739184 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Nov 8 00:26:28.739188 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Nov 8 00:26:28.739194 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Nov 8 00:26:28.739199 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Nov 8 00:26:28.739204 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Nov 8 00:26:28.739209 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Nov 8 00:26:28.739214 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Nov 8 00:26:28.739218 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Nov 8 00:26:28.739223 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Nov 8 00:26:28.739228 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Nov 8 00:26:28.739232 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Nov 8 00:26:28.739237 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Nov 8 00:26:28.739243 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Nov 8 00:26:28.739248 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Nov 8 00:26:28.739253 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Nov 8 00:26:28.739257 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Nov 8 00:26:28.739262 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Nov 8 00:26:28.739267 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Nov 8 00:26:28.739271 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Nov 8 00:26:28.739276 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Nov 8 00:26:28.739281 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Nov 8 00:26:28.739286 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Nov 8 00:26:28.739292 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Nov 8 00:26:28.739296 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Nov 8 00:26:28.739301 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Nov 8 00:26:28.739306 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Nov 8 00:26:28.739311 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Nov 8 00:26:28.739315 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Nov 8 00:26:28.739320 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Nov 8 00:26:28.739325 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Nov 8 00:26:28.739329 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Nov 8 00:26:28.739334 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Nov 8 00:26:28.739340 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Nov 8 00:26:28.739345 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Nov 8 00:26:28.739350 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Nov 8 00:26:28.739358 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Nov 8 00:26:28.739364 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Nov 8 00:26:28.739369 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Nov 8 00:26:28.739374 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Nov 8 00:26:28.739379 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Nov 8 00:26:28.739384 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Nov 8 00:26:28.739436 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Nov 8 00:26:28.739443 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Nov 8 00:26:28.739448 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Nov 8 00:26:28.739453 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Nov 8 00:26:28.739458 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Nov 8 00:26:28.739463 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Nov 8 00:26:28.739468 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Nov 8 00:26:28.739473 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Nov 8 00:26:28.739478 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Nov 8 00:26:28.739483 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Nov 8 00:26:28.739490 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Nov 8 00:26:28.739495 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Nov 8 00:26:28.739501 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Nov 8 00:26:28.739506 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Nov 8 00:26:28.739511 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Nov 8 00:26:28.739516 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Nov 8 00:26:28.739521 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Nov 8 00:26:28.739526 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Nov 8 00:26:28.739531 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Nov 8 00:26:28.739536 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Nov 8 00:26:28.739542 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Nov 8 00:26:28.739547 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Nov 8 00:26:28.739552 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Nov 8 00:26:28.739557 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Nov 8 00:26:28.739562 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Nov 8 00:26:28.739567 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Nov 8 00:26:28.739573 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Nov 8 00:26:28.739578 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Nov 8 00:26:28.739583 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Nov 8 00:26:28.739588 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Nov 8 00:26:28.739594 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Nov 8 00:26:28.739599 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Nov 8 00:26:28.739604 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Nov 8 00:26:28.739609 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Nov 8 00:26:28.739614 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Nov 8 00:26:28.739619 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Nov 8 00:26:28.739624 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Nov 8 00:26:28.739629 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Nov 8 00:26:28.739634 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Nov 8 00:26:28.739639 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Nov 8 00:26:28.739645 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Nov 8 00:26:28.739651 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Nov 8 00:26:28.739656 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Nov 8 00:26:28.739661 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Nov 8 00:26:28.739666 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Nov 8 00:26:28.739671 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Nov 8 00:26:28.739676 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Nov 8 00:26:28.739681 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Nov 8 00:26:28.739686 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Nov 8 00:26:28.739691 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Nov 8 00:26:28.739696 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Nov 8 00:26:28.739702 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Nov 8 00:26:28.739707 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Nov 8 00:26:28.739712 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Nov 8 00:26:28.739717 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Nov 8 00:26:28.739722 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Nov 8 00:26:28.739727 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Nov 8 00:26:28.739732 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Nov 8 00:26:28.739737 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Nov 8 00:26:28.739742 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Nov 8 00:26:28.739748 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Nov 8 00:26:28.739754 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Nov 8 00:26:28.739759 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Nov 8 00:26:28.739764 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 8 00:26:28.739769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 8 00:26:28.739775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Nov 8 00:26:28.739780 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Nov 8 00:26:28.739785 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Nov 8 00:26:28.739791 kernel: Zone ranges:
Nov 8 00:26:28.739796 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:26:28.739802 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Nov 8 00:26:28.739808 kernel: Normal empty
Nov 8 00:26:28.739813 kernel: Movable zone start for each node
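
Each "SRAT: PXM 0 -> APIC 0xNN -> Node 0" line above binds one possible CPU (by APIC ID) to proximity domain 0, so all 128 possible CPUs, mostly hotplug placeholders, land on a single NUMA node. A small sketch (Python, assuming the same log_text capture) that tallies APIC IDs per node:

    import re
    from collections import Counter

    SRAT_RE = re.compile(r"SRAT: PXM (\d+) -> APIC (0x[0-9a-f]+) -> Node (\d+)")

    def apics_per_node(log_text):
        # Count how many APIC IDs the SRAT assigns to each NUMA node.
        counts = Counter()
        for _pxm, _apic, node in SRAT_RE.findall(log_text):
            counts[int(node)] += 1
        return dict(counts)

Against this log it returns {0: 128}: APIC IDs 0x00 through 0xfe, stepping by 2, all on node 0.
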
Nov 8 00:26:28.739818 kernel: Early memory node ranges
Nov 8 00:26:28.739823 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Nov 8 00:26:28.739829 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Nov 8 00:26:28.739834 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Nov 8 00:26:28.739839 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Nov 8 00:26:28.739844 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:26:28.739849 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Nov 8 00:26:28.739855 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Nov 8 00:26:28.739860 kernel: ACPI: PM-Timer IO Port: 0x1008
Nov 8 00:26:28.739866 kernel: system APIC only can use physical flat
Nov 8 00:26:28.739871 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Nov 8 00:26:28.739876 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 8 00:26:28.739881 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 8 00:26:28.739886 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 8 00:26:28.739891 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 8 00:26:28.739896 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 8 00:26:28.739902 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 8 00:26:28.739907 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 8 00:26:28.739913 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 8 00:26:28.739918 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 8 00:26:28.739923 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 8 00:26:28.739928 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 8 00:26:28.739933 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 8 00:26:28.739938 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 8 00:26:28.739943 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 8 00:26:28.739948 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 8 00:26:28.739954 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 8 00:26:28.739960 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Nov 8 00:26:28.739965 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Nov 8 00:26:28.739970 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Nov 8 00:26:28.739975 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Nov 8 00:26:28.739980 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Nov 8 00:26:28.739985 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Nov 8 00:26:28.739990 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Nov 8 00:26:28.739995 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Nov 8 00:26:28.740000 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Nov 8 00:26:28.740007 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Nov 8 00:26:28.740012 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Nov 8 00:26:28.740017 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Nov 8 00:26:28.740022 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Nov 8 00:26:28.740027 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Nov 8 00:26:28.740032 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Nov 8 00:26:28.740038 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Nov 8 00:26:28.740043 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Nov 8 00:26:28.740048 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Nov 8 00:26:28.740054 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Nov 8 00:26:28.740059 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Nov 8 00:26:28.740064 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Nov 8 00:26:28.740069 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Nov 8 00:26:28.740075 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Nov 8 00:26:28.740080 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Nov 8 00:26:28.740085 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Nov 8 00:26:28.740090 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Nov 8 00:26:28.740095 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Nov 8 00:26:28.740100 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Nov 8 00:26:28.740106 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Nov 8 00:26:28.740112 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Nov 8 00:26:28.740117 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Nov 8 00:26:28.740122 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Nov 8 00:26:28.740127 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Nov 8 00:26:28.740132 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Nov 8 00:26:28.740137 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Nov 8 00:26:28.740142 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Nov 8 00:26:28.740147 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Nov 8 00:26:28.740154 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Nov 8 00:26:28.740159 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Nov 8 00:26:28.740164 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Nov 8 00:26:28.740169 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Nov 8 00:26:28.740174 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Nov 8 00:26:28.740179 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Nov 8 00:26:28.740184 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Nov 8 00:26:28.740189 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Nov 8 00:26:28.740194 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Nov 8 00:26:28.740199 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Nov 8 00:26:28.740206 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Nov 8 00:26:28.740211 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Nov 8 00:26:28.740216 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Nov 8 00:26:28.740221 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Nov 8 00:26:28.740226 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Nov 8 00:26:28.740231 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Nov 8 00:26:28.740236 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Nov 8 00:26:28.740241 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Nov 8 00:26:28.740247 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Nov 8 00:26:28.740252 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Nov 8 00:26:28.740258 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Nov 8 00:26:28.740263 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Nov 8 00:26:28.740269 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Nov 8 00:26:28.740274 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Nov 8 00:26:28.740279 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Nov 8 00:26:28.740284 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Nov 8 00:26:28.740289 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Nov 8 00:26:28.740294 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Nov 8 00:26:28.740299 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Nov 8 00:26:28.740304 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Nov 8 00:26:28.740311 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Nov 8 00:26:28.740316 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Nov 8 00:26:28.740321 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Nov 8 00:26:28.740326 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Nov 8 00:26:28.740331 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Nov 8 00:26:28.740336 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Nov 8 00:26:28.740341 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Nov 8 00:26:28.740346 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Nov 8 00:26:28.740351 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Nov 8 00:26:28.740358 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Nov 8 00:26:28.740363 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Nov 8 00:26:28.740368 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Nov 8 00:26:28.740373 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Nov 8 00:26:28.740378 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Nov 8 00:26:28.740383 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Nov 8 00:26:28.740388 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Nov 8 00:26:28.740393 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Nov 8 00:26:28.740403 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Nov 8 00:26:28.740408 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Nov 8 00:26:28.740414 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Nov 8 00:26:28.740420 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Nov 8 00:26:28.740425 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Nov 8 00:26:28.740430 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Nov 8 00:26:28.740435 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Nov 8 00:26:28.740440 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Nov 8 00:26:28.740445 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Nov 8 00:26:28.740450 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Nov 8 00:26:28.740456 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Nov 8 00:26:28.740461 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Nov 8 00:26:28.740467 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Nov 8 00:26:28.740472 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Nov 8 00:26:28.740477 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Nov 8 00:26:28.740482 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Nov 8 00:26:28.740488 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Nov 8 00:26:28.740493 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Nov 8 00:26:28.740498 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Nov 8 00:26:28.740503 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Nov 8 00:26:28.740508 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Nov 8 00:26:28.740514 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Nov 8 00:26:28.740519 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Nov 8 00:26:28.740524 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Nov 8 00:26:28.740529 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Nov 8 00:26:28.740535 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Nov 8 00:26:28.740540 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Nov 8 00:26:28.740545 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:26:28.740550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Nov 8 00:26:28.740555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:26:28.740561 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Nov 8 00:26:28.740568 kernel: TSC deadline timer available
Nov 8 00:26:28.740573 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Nov 8 00:26:28.740579 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Nov 8 00:26:28.740584 kernel: Booting paravirtualized kernel on VMware hypervisor
Nov 8 00:26:28.740589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:26:28.740594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Nov 8 00:26:28.740600 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 8 00:26:28.740605 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 8 00:26:28.740610 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Nov 8 00:26:28.740616 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Nov 8 00:26:28.740622 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Nov 8 00:26:28.740627 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Nov 8 00:26:28.740633 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Nov 8 00:26:28.740652 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Nov 8 00:26:28.740663 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Nov 8 00:26:28.740669 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Nov 8 00:26:28.740674 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Nov 8 00:26:28.740680 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Nov 8 00:26:28.740703 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Nov 8 00:26:28.740708 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Nov 8 00:26:28.740714 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Nov 8 00:26:28.740719 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Nov 8 00:26:28.740725 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Nov 8 00:26:28.740730 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Nov 8 00:26:28.740736 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:26:28.740742 kernel: random: crng init done
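
The "Kernel command line:" entry above is the effective boot line; note that rootflags=rw and mount.usrflags=ro appear twice, once prepended and once among the BOOT_IMAGE arguments, which is harmless since the values agree. A simplified sketch (Python) of how such a line splits into bare flags and key=value options:

    def parse_cmdline(cmdline):
        # Split a kernel command line into bare flags (e.g. flatcar.autologin)
        # and key=value options (e.g. root=LABEL=ROOT). Simplified: repeated
        # keys keep only the last value, although the real kernel treats some
        # parameters (console=, for one) as cumulative.
        flags, options = [], {}
        for token in cmdline.split():
            if "=" in token:
                key, _, value = token.partition("=")
                options[key] = value
            else:
                flags.append(token)
        return flags, options

For the line above, options["root"] is "LABEL=ROOT" and options["verity.usr"] is the PARTUUID= value; the two console= settings collapse to the last one in this sketch.
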
Nov 8 00:26:28.740749 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Nov 8 00:26:28.740755 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Nov 8 00:26:28.740761 kernel: printk: log_buf_len min size: 262144 bytes
Nov 8 00:26:28.740766 kernel: printk: log_buf_len: 1048576 bytes
Nov 8 00:26:28.740772 kernel: printk: early log buf free: 239760(91%)
Nov 8 00:26:28.740777 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:26:28.740783 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:26:28.740788 kernel: Fallback order for Node 0: 0
Nov 8 00:26:28.740794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Nov 8 00:26:28.740801 kernel: Policy zone: DMA32
Nov 8 00:26:28.740806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:26:28.740812 kernel: Memory: 1936368K/2096628K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 160000K reserved, 0K cma-reserved)
Nov 8 00:26:28.740819 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Nov 8 00:26:28.740824 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:26:28.740831 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:26:28.740837 kernel: Dynamic Preempt: voluntary
Nov 8 00:26:28.740842 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:26:28.740848 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:26:28.740854 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Nov 8 00:26:28.740859 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:26:28.740865 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:26:28.740871 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:26:28.740876 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:26:28.740882 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Nov 8 00:26:28.740888 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Nov 8 00:26:28.740894 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Nov 8 00:26:28.740899 kernel: Console: colour VGA+ 80x25
Nov 8 00:26:28.740905 kernel: printk: console [tty0] enabled
Nov 8 00:26:28.740911 kernel: printk: console [ttyS0] enabled
Nov 8 00:26:28.740916 kernel: ACPI: Core revision 20230628
Nov 8 00:26:28.740922 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Nov 8 00:26:28.740928 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:26:28.740934 kernel: x2apic enabled
Nov 8 00:26:28.740940 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:26:28.740946 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:26:28.740952 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Nov 8 00:26:28.740957 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Nov 8 00:26:28.740963 kernel: Disabled fast string operations
Nov 8 00:26:28.740969 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:26:28.740974 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:26:28.740981 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:26:28.740987 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 8 00:26:28.740994 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 8 00:26:28.741000 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 8 00:26:28.741005 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 8 00:26:28.741011 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 8 00:26:28.741016 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:26:28.741022 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:26:28.741028 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:26:28.741033 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 8 00:26:28.741039 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 8 00:26:28.741045 kernel: active return thunk: its_return_thunk
Nov 8 00:26:28.741051 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:26:28.741057 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:26:28.741062 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:26:28.741068 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:26:28.741073 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:26:28.741079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:26:28.741084 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:26:28.741090 kernel: pid_max: default: 131072 minimum: 1024
Nov 8 00:26:28.741096 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:26:28.741102 kernel: landlock: Up and running.
Nov 8 00:26:28.741108 kernel: SELinux: Initializing.
Nov 8 00:26:28.741113 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:26:28.741119 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:26:28.741125 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 8 00:26:28.741130 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 8 00:26:28.741136 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Nov 8 00:26:28.741141 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
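
The Spectre/RETBleed/MMIO lines above record which mitigations the kernel selected at boot on this VMware guest. On a running system the same state is exported under /sys/devices/system/cpu/vulnerabilities; a minimal sketch (Python) that dumps it for comparison with the dmesg lines:

    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigations():
        # One file per known CPU vulnerability; the contents mirror the
        # "Mitigation: ..." / "Vulnerable: ..." strings seen in dmesg.
        return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

    for name, state in mitigations().items():
        print(f"{name:24} {state}")
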
Nov 8 00:26:28.741148 kernel: Performance Events: Skylake events, core PMU driver.
Nov 8 00:26:28.741154 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Nov 8 00:26:28.741159 kernel: core: CPUID marked event: 'instructions' unavailable
Nov 8 00:26:28.741165 kernel: core: CPUID marked event: 'bus cycles' unavailable
Nov 8 00:26:28.741170 kernel: core: CPUID marked event: 'cache references' unavailable
Nov 8 00:26:28.741175 kernel: core: CPUID marked event: 'cache misses' unavailable
Nov 8 00:26:28.741181 kernel: core: CPUID marked event: 'branch instructions' unavailable
Nov 8 00:26:28.741186 kernel: core: CPUID marked event: 'branch misses' unavailable
Nov 8 00:26:28.741193 kernel: ... version: 1
Nov 8 00:26:28.741198 kernel: ... bit width: 48
Nov 8 00:26:28.741204 kernel: ... generic registers: 4
Nov 8 00:26:28.741209 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:26:28.741215 kernel: ... max period: 000000007fffffff
Nov 8 00:26:28.741220 kernel: ... fixed-purpose events: 0
Nov 8 00:26:28.741226 kernel: ... event mask: 000000000000000f
Nov 8 00:26:28.741232 kernel: signal: max sigframe size: 1776
Nov 8 00:26:28.741237 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:26:28.741244 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:26:28.741250 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:26:28.741255 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:26:28.741261 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:26:28.741267 kernel: .... node #0, CPUs: #1
Nov 8 00:26:28.741272 kernel: Disabled fast string operations
Nov 8 00:26:28.741277 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Nov 8 00:26:28.741283 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 8 00:26:28.741288 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:26:28.741294 kernel: smpboot: Max logical packages: 128
Nov 8 00:26:28.741301 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Nov 8 00:26:28.741306 kernel: devtmpfs: initialized
Nov 8 00:26:28.741312 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:26:28.741318 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Nov 8 00:26:28.741324 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:26:28.741330 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Nov 8 00:26:28.741336 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:26:28.741341 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:26:28.741347 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:26:28.741354 kernel: audit: type=2000 audit(1762561587.086:1): state=initialized audit_enabled=0 res=1
Nov 8 00:26:28.741359 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:26:28.741364 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:26:28.741370 kernel: cpuidle: using governor menu
Nov 8 00:26:28.741376 kernel: Simple Boot Flag at 0x36 set to 0x80
Nov 8 00:26:28.741381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:26:28.741387 kernel: dca service started, version 1.12.1
Nov 8 00:26:28.741392 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Nov 8 00:26:28.741439 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:26:28.741448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:26:28.741453 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:26:28.741459 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:26:28.741465 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:26:28.741470 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:26:28.741476 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:26:28.741481 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:26:28.741487 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:26:28.741492 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:26:28.741499 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Nov 8 00:26:28.741505 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:26:28.741510 kernel: ACPI: Interpreter enabled
Nov 8 00:26:28.741516 kernel: ACPI: PM: (supports S0 S1 S5)
Nov 8 00:26:28.741521 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:26:28.741527 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:26:28.741532 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:26:28.741538 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Nov 8 00:26:28.741543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Nov 8 00:26:28.741625 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:26:28.741680 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Nov 8 00:26:28.741729 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Nov 8 00:26:28.741737 kernel: PCI host bridge to bus 0000:00
Nov 8 00:26:28.741789 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:26:28.741833 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Nov 8 00:26:28.741879 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:26:28.741922 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:26:28.741964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Nov 8 00:26:28.742007 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Nov 8 00:26:28.742135 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Nov 8 00:26:28.742195 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Nov 8 00:26:28.742306 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Nov 8 00:26:28.742365 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Nov 8 00:26:28.742430 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Nov 8 00:26:28.742482 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 8 00:26:28.742531 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 8 00:26:28.742581 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 8 00:26:28.742631 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 8 00:26:28.742710 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Nov 8 00:26:28.742792 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Nov 8 00:26:28.742841 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Nov 8 00:26:28.742895 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Nov 8 00:26:28.742945 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Nov 8 00:26:28.742994 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Nov 8 00:26:28.743048 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Nov 8 00:26:28.743225 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Nov 8 00:26:28.743310 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Nov 8 00:26:28.743393 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Nov 8 00:26:28.743864 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Nov 8 00:26:28.743919 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:26:28.743976 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Nov 8 00:26:28.744037 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.744090 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.744149 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.744201 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.744257 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.744310 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.744365 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.744427 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.744483 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.746323 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.746389 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.746455 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.746515 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.746572 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.746627 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.746681 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.746737 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.746790 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.746848 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.746900 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.746955 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.747007 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.747062 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.747114 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.747171 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.747224 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.747279 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.747331 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.747389 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.748496 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.748558 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.748616 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.748673 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.748738 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.748795 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.748848 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.748904 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.748959 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.749017 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.749132 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.749192 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.749244 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.749300 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.749355 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.750428 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.750488 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.750602 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.750658 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.750735 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.750792 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.750847 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.750898 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.750953 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.751004 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.751059 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.751111 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.751170 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.751222 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.751276 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.751327 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.753433 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.753491 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.753552 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Nov 8 00:26:28.753604 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.753656 kernel: pci_bus 0000:01: extended config space not accessible
Nov 8 00:26:28.753707 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 8 00:26:28.753782 kernel: pci_bus 0000:02: extended config space not accessible
Nov 8 00:26:28.753800 kernel: acpiphp: Slot [32] registered
Nov 8 00:26:28.753809 kernel: acpiphp: Slot [33] registered
Nov 8 00:26:28.753815 kernel: acpiphp: Slot [34] registered
Nov 8 00:26:28.753820 kernel: acpiphp: Slot [35] registered
Nov 8 00:26:28.753826 kernel: acpiphp: Slot [36] registered
Nov 8 00:26:28.753834 kernel: acpiphp: Slot [37] registered
Nov 8 00:26:28.753840 kernel: acpiphp: Slot [38] registered
Nov 8 00:26:28.753845 kernel: acpiphp: Slot [39] registered
Nov 8 00:26:28.753851 kernel: acpiphp: Slot [40] registered
Nov 8 00:26:28.753857 kernel: acpiphp: Slot [41] registered
Nov 8 00:26:28.753862 kernel: acpiphp: Slot [42] registered
Nov 8 00:26:28.753869 kernel: acpiphp: Slot [43] registered
Nov 8 00:26:28.753875 kernel: acpiphp: Slot [44] registered
Nov 8 00:26:28.753880 kernel: acpiphp: Slot [45] registered
Nov 8 00:26:28.753886 kernel: acpiphp: Slot [46] registered
Nov 8 00:26:28.753892 kernel: acpiphp: Slot [47] registered
Nov 8 00:26:28.753897 kernel: acpiphp: Slot [48] registered
Nov 8 00:26:28.753903 kernel: acpiphp: Slot [49] registered
Nov 8 00:26:28.753908 kernel: acpiphp: Slot [50] registered
Nov 8 00:26:28.753914 kernel: acpiphp: Slot [51] registered
Nov 8 00:26:28.753921 kernel: acpiphp: Slot [52] registered
Nov 8 00:26:28.753926 kernel: acpiphp: Slot [53] registered
Nov 8 00:26:28.753932 kernel: acpiphp: Slot [54] registered
Nov 8 00:26:28.753937 kernel: acpiphp: Slot [55] registered
Nov 8 00:26:28.753943 kernel: acpiphp: Slot [56] registered
Nov 8 00:26:28.753949 kernel: acpiphp: Slot [57] registered
Nov 8 00:26:28.753954 kernel: acpiphp: Slot [58] registered
Nov 8 00:26:28.753960 kernel: acpiphp: Slot [59] registered
Nov 8 00:26:28.753965 kernel: acpiphp: Slot [60] registered
Nov 8 00:26:28.753971 kernel: acpiphp: Slot [61] registered
Nov 8 00:26:28.753978 kernel: acpiphp: Slot [62] registered
Nov 8 00:26:28.753984 kernel: acpiphp: Slot [63] registered
Nov 8 00:26:28.754039 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Nov 8 00:26:28.754089 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Nov 8 00:26:28.754137 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Nov 8 00:26:28.754185 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Nov 8 00:26:28.754234 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Nov 8 00:26:28.754283 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Nov 8 00:26:28.754344 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Nov 8 00:26:28.754423 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Nov 8 00:26:28.754474 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Nov 8 00:26:28.754529 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Nov 8 00:26:28.754581 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Nov 8 00:26:28.754630 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Nov 8 00:26:28.754680 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Nov 8 00:26:28.754769 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Nov 8 00:26:28.754819 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
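
Enumeration entries like "pci 0000:03:00.0: [15ad:07c0]" carry the function's domain:bus:device.function address plus its vendor:device ID (15ad is VMware; 07c0 here is the PVSCSI controller just probed). A small sketch (Python, same log_text assumption) that inventories the IDs seen during enumeration:

    import re
    from collections import Counter

    PCI_RE = re.compile(r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\] type")

    def pci_inventory(log_text):
        # Map vendor:device -> number of functions enumerated with that ID.
        return dict(Counter(f"{v}:{d}" for _addr, v, d in PCI_RE.findall(log_text)))

In this log "15ad:07a0", the VMware PCIe root port, dominates: one function for each of the 00:15.x, 00:16.x, 00:17.x and 00:18.x ports below.
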
Nov 8 00:26:28.754870 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Nov 8 00:26:28.754919 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Nov 8 00:26:28.754969 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Nov 8 00:26:28.755020 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Nov 8 00:26:28.755069 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Nov 8 00:26:28.755117 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Nov 8 00:26:28.755170 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Nov 8 00:26:28.755219 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Nov 8 00:26:28.755268 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Nov 8 00:26:28.755317 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Nov 8 00:26:28.755366 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Nov 8 00:26:28.757443 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Nov 8 00:26:28.757508 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Nov 8 00:26:28.757566 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Nov 8 00:26:28.757619 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Nov 8 00:26:28.757670 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Nov 8 00:26:28.757758 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Nov 8 00:26:28.757812 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Nov 8 00:26:28.757863 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Nov 8 00:26:28.757913 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Nov 8 00:26:28.757964 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Nov 8 00:26:28.758014 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Nov 8 00:26:28.758064 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Nov 8 00:26:28.758114 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Nov 8 00:26:28.758163 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Nov 8 00:26:28.758215 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Nov 8 00:26:28.758272 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Nov 8 00:26:28.758324 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Nov 8 00:26:28.758375 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Nov 8 00:26:28.758441 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Nov 8 00:26:28.758493 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Nov 8 00:26:28.758543 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Nov 8 00:26:28.758594 kernel: pci 0000:0b:00.0: supports D1 D2
Nov 8 00:26:28.758649 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 8 00:26:28.758704 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Nov 8 00:26:28.758755 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Nov 8 00:26:28.758806 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Nov 8 00:26:28.758856 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Nov 8 00:26:28.758907 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Nov 8 00:26:28.758957 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Nov 8 00:26:28.759008 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Nov 8 00:26:28.759062 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Nov 8 00:26:28.759114 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Nov 8 00:26:28.759164 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Nov 8 00:26:28.759214 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Nov 8 00:26:28.759265 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Nov 8 00:26:28.759316 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Nov 8 00:26:28.759367 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Nov 8 00:26:28.760243 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Nov 8 00:26:28.760300 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Nov 8 00:26:28.760350 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Nov 8 00:26:28.760408 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Nov 8 00:26:28.760459 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Nov 8 00:26:28.760508 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Nov 8 00:26:28.760557 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Nov 8 00:26:28.760605 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Nov 8 00:26:28.760659 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Nov 8 00:26:28.760747 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Nov 8 00:26:28.760798 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Nov 8 00:26:28.760848 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Nov 8 00:26:28.760897 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Nov 8 00:26:28.760947 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Nov 8 00:26:28.760997 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Nov 8 00:26:28.761046 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Nov 8 00:26:28.761097 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Nov 8 00:26:28.761148 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Nov 8 00:26:28.761197 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Nov 8 00:26:28.761247 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Nov 8 00:26:28.761296 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Nov 8 00:26:28.761346 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Nov 8 00:26:28.761403 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Nov 8 00:26:28.761456 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Nov 8 00:26:28.761508 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Nov 8 00:26:28.761559 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Nov 8 00:26:28.761609 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Nov 8 00:26:28.761659 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Nov 8 00:26:28.761708 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Nov 8 00:26:28.761758 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Nov 8 00:26:28.761808 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Nov 8 00:26:28.761861 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Nov 8 00:26:28.761911 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Nov 8 00:26:28.761960 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Nov 8 00:26:28.762010 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Nov 8 00:26:28.762060 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Nov 8 00:26:28.762109 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Nov 8 00:26:28.762159 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Nov 8 00:26:28.762208 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Nov 8 00:26:28.762260 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Nov 8 00:26:28.762309 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Nov 8 00:26:28.762358 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Nov 8 00:26:28.762669 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Nov 8 00:26:28.762775 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Nov 8 00:26:28.762827 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Nov 8 00:26:28.762877 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Nov 8 00:26:28.762927 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Nov 8 00:26:28.762979 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Nov 8 00:26:28.763030 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Nov 8 00:26:28.763079 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Nov 8 00:26:28.763128 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Nov 8 00:26:28.763177 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Nov 8 00:26:28.763227 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Nov 8 00:26:28.763276 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Nov 8 00:26:28.763324 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Nov 8 00:26:28.763377 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Nov 8 00:26:28.763440 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Nov 8 00:26:28.763491 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Nov 8 00:26:28.763540 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Nov 8 00:26:28.763588 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Nov 8 00:26:28.763637 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Nov 8 00:26:28.763692 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Nov 8 00:26:28.763745 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Nov 8 00:26:28.763798 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Nov 8 00:26:28.763848 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Nov 8 00:26:28.763897 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Nov 8 00:26:28.763906 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Nov 8 00:26:28.763912 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Nov 8 00:26:28.763918 kernel: ACPI: PCI: Interrupt
link LNKB disabled Nov 8 00:26:28.763923 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:26:28.763929 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 8 00:26:28.763935 kernel: iommu: Default domain type: Translated Nov 8 00:26:28.763942 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:26:28.763948 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:26:28.763954 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:26:28.763959 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 8 00:26:28.763965 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 8 00:26:28.764013 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 8 00:26:28.764062 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 8 00:26:28.764111 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:26:28.764121 kernel: vgaarb: loaded Nov 8 00:26:28.764127 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 8 00:26:28.764132 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 8 00:26:28.764138 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:26:28.764144 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:26:28.764149 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:26:28.764155 kernel: pnp: PnP ACPI init Nov 8 00:26:28.764207 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 8 00:26:28.764253 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 8 00:26:28.764301 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 8 00:26:28.764349 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 8 00:26:28.764460 kernel: pnp 00:06: [dma 2] Nov 8 00:26:28.764520 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 8 00:26:28.764566 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 8 00:26:28.764611 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 8 00:26:28.764622 kernel: pnp: PnP ACPI: found 8 devices Nov 8 00:26:28.764628 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:26:28.764633 kernel: NET: Registered PF_INET protocol family Nov 8 00:26:28.764639 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:26:28.764645 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:26:28.764651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:26:28.764656 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:26:28.764662 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:26:28.764668 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:26:28.764674 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:26:28.764680 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:26:28.764689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:26:28.764695 kernel: NET: Registered PF_XDP protocol family Nov 8 00:26:28.764747 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 8 00:26:28.764797 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 8 00:26:28.764847 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:26:28.764900 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:26:28.764949 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:26:28.764998 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 8 00:26:28.765047 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 8 00:26:28.765097 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 8 00:26:28.765146 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 8 00:26:28.765198 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 8 00:26:28.765247 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 8 00:26:28.765297 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 8 00:26:28.765346 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 8 00:26:28.765402 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 8 00:26:28.765454 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 8 00:26:28.765507 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 8 00:26:28.765557 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 8 00:26:28.765608 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 8 00:26:28.765657 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 8 00:26:28.765707 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 8 00:26:28.765757 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 8 00:26:28.765808 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 8 00:26:28.765858 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 8 00:26:28.765908 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:26:28.765957 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:26:28.766006 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766055 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766107 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766156 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766205 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766254 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766304 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766353 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766426 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766477 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766530 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766578 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766626 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766675 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766728 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766776 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766824 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766872 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766921 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766973 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767022 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767071 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767119 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767168 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767217 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767265 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767315 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767367 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767422 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767472 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767521 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767570 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767619 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767668 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767717 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767769 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767818 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767867 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767916 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767965 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768014 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768062 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768110 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768162 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768210 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768259 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768308 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768356 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768417 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768470 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768519 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768568 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768619 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Nov 8 00:26:28.768669 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768722 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768772 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768821 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768870 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768919 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768968 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769017 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769067 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769119 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769168 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769217 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769267 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769316 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769364 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769484 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769535 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769584 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769637 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769690 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769739 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769792 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769841 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769890 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769938 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769987 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.770036 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.770085 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.770136 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.770184 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.770233 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.770281 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:26:28.770330 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 8 00:26:28.770379 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:26:28.771477 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:26:28.771536 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:26:28.771595 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 8 00:26:28.771647 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:26:28.771697 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:26:28.771746 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:26:28.771822 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:26:28.771875 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:26:28.771925 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:26:28.771975 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:26:28.772025 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:26:28.772079 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:26:28.772128 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:26:28.772178 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:26:28.772227 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:26:28.772277 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:26:28.772326 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:26:28.772376 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:26:28.772434 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:26:28.772484 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:26:28.772533 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:26:28.772587 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:26:28.772636 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:26:28.772711 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:26:28.772780 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:26:28.772829 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:26:28.772880 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:26:28.772930 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:26:28.772980 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:26:28.773030 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:26:28.773081 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 8 00:26:28.773132 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:26:28.773181 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:26:28.773231 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:26:28.773280 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:26:28.773333 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:26:28.773382 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:26:28.774454 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:26:28.774511 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:26:28.774564 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:26:28.774615 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:26:28.774665 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:26:28.774715 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:26:28.774799 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:26:28.774849 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:26:28.774903 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:26:28.774953 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:26:28.775003 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:26:28.775052 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:26:28.775102 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:26:28.775180 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:26:28.775936 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:26:28.775997 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:26:28.776049 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:26:28.776103 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:26:28.776153 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:26:28.776203 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:26:28.776252 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:26:28.776302 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:26:28.776352 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:26:28.776448 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:26:28.776502 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:26:28.776553 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:26:28.776602 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:26:28.776655 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:26:28.776712 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:26:28.776763 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:26:28.776812 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:26:28.776861 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:26:28.776910 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:26:28.776959 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:26:28.777009 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:26:28.777059 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:26:28.777112 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:26:28.777251 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:26:28.777304 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:26:28.777354 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:26:28.777431 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:26:28.777484 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:26:28.777533 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:26:28.777582 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:26:28.777631 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:26:28.777680 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:26:28.777732 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:26:28.777782 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:26:28.777831 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:26:28.777882 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:26:28.777931 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:26:28.777981 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:26:28.778031 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:26:28.778081 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:26:28.778131 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:26:28.778183 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:26:28.778235 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:26:28.778284 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:26:28.778369 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:26:28.778469 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:26:28.778522 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:26:28.778572 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:26:28.778622 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:26:28.778672 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:26:28.778754 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:26:28.778807 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:26:28.778856 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:26:28.778906 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:26:28.778956 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:26:28.779006 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:26:28.779056 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:26:28.779106 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:26:28.779155 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:26:28.779205 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:26:28.779257 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:26:28.779302 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:26:28.779346 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:26:28.779389 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:26:28.779458 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:26:28.779507 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 8 00:26:28.779586 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 8 00:26:28.779656 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:26:28.779725 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:26:28.779779 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:26:28.779825 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:26:28.779870 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:26:28.779914 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:26:28.779964 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 8 00:26:28.780010 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 8 00:26:28.780058 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:26:28.780108 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 8 00:26:28.780153 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Nov 8 00:26:28.780198 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:26:28.780247 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 8 00:26:28.780293 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 8 00:26:28.780339 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:26:28.780391 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 8 00:26:28.780488 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:26:28.780537 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 8 00:26:28.780583 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:26:28.780633 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 8 00:26:28.780679 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:26:28.780773 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 8 00:26:28.780819 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:26:28.780871 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 8 00:26:28.780917 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:26:28.780977 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 8 00:26:28.781023 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 8 00:26:28.781071 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:26:28.781119 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 8 00:26:28.781166 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 8 00:26:28.781212 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:26:28.781261 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 8 00:26:28.781308 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 8 00:26:28.781360 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:26:28.781428 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 8 00:26:28.781477 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:26:28.781526 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 8 00:26:28.781573 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:26:28.781622 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 8 00:26:28.781668 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:26:28.781720 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 8 00:26:28.781767 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:26:28.781818 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 8 00:26:28.781864 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:26:28.781913 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 8 00:26:28.781960 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 8 00:26:28.782008 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:26:28.782057 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 8 00:26:28.782103 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 8 00:26:28.782148 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:26:28.782197 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Nov 8 00:26:28.782243 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 8 00:26:28.782288 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:26:28.782340 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 8 00:26:28.782387 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:26:28.782460 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 8 00:26:28.782506 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:26:28.782555 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 8 00:26:28.782617 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:26:28.782671 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 8 00:26:28.782769 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:26:28.782821 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 8 00:26:28.782867 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:26:28.782920 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 8 00:26:28.782966 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 8 00:26:28.783014 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:26:28.783063 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 8 00:26:28.783112 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 8 00:26:28.783158 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:26:28.783207 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 8 00:26:28.783253 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:26:28.783305 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 8 00:26:28.783352 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:26:28.783411 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 8 00:26:28.783462 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:26:28.783512 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 8 00:26:28.783560 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:26:28.783610 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 8 00:26:28.783660 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:26:28.783714 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 8 00:26:28.783761 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:26:28.783815 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:26:28.783824 kernel: PCI: CLS 32 bytes, default 64 Nov 8 00:26:28.783831 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:26:28.783840 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:26:28.783846 kernel: clocksource: Switched to clocksource tsc Nov 8 00:26:28.783852 kernel: Initialise system trusted keyrings Nov 8 00:26:28.783858 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:26:28.783864 kernel: Key type asymmetric registered Nov 8 00:26:28.783870 kernel: Asymmetric key parser 'x509' registered Nov 8 00:26:28.783876 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Nov 8 00:26:28.783882 kernel: io scheduler mq-deadline registered Nov 8 00:26:28.783888 kernel: io scheduler kyber registered Nov 8 00:26:28.783894 kernel: io scheduler bfq registered Nov 8 00:26:28.783947 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 8 00:26:28.783999 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784051 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 8 00:26:28.784101 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784152 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 8 00:26:28.784203 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784254 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 8 00:26:28.784307 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784358 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 8 00:26:28.785496 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785559 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 8 00:26:28.785615 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785672 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 8 00:26:28.785724 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785776 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 8 00:26:28.785829 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785879 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 8 00:26:28.785931 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785985 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 8 00:26:28.786036 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786086 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 8 00:26:28.786137 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786187 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 8 00:26:28.786238 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786288 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 8 00:26:28.786342 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786393 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Nov 8 00:26:28.787056 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787113 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 8 00:26:28.787167 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787223 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 8 00:26:28.787274 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787327 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 8 00:26:28.787378 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787467 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 8 00:26:28.787521 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787576 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 8 00:26:28.787628 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788004 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 8 00:26:28.788062 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788115 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 8 00:26:28.788168 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788223 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 8 00:26:28.788275 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788327 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 8 00:26:28.788379 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788445 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 8 00:26:28.788501 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788553 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 8 00:26:28.788604 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788656 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 8 00:26:28.788713 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788766 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 8 00:26:28.788836 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788906 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 8 00:26:28.788957 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789009 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 8 00:26:28.789060 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789111 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 8 00:26:28.789165 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789217 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 8 00:26:28.789268 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789319 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 8 00:26:28.789371 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789382 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:26:28.789388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:26:28.789395 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:26:28.789535 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 8 00:26:28.789542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:26:28.789548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:26:28.789605 kernel: rtc_cmos 00:01: registered as rtc0 Nov 8 00:26:28.789879 kernel: rtc_cmos 00:01: setting system clock to 2025-11-08T00:26:28 UTC (1762561588) Nov 8 00:26:28.789937 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:26:28.789946 kernel: intel_pstate: CPU model not supported Nov 8 00:26:28.789953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:26:28.789959 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:26:28.789965 kernel: Segment Routing with IPv6 Nov 8 00:26:28.789972 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:26:28.789978 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:26:28.789984 kernel: Key type dns_resolver registered Nov 8 00:26:28.789990 kernel: IPI shorthand broadcast: enabled Nov 8 00:26:28.789999 kernel: sched_clock: Marking stable (874436697, 215198721)->(1141805795, -52170377) Nov 8 00:26:28.790009 kernel: registered taskstats version 1 Nov 8 00:26:28.790015 kernel: Loading compiled-in X.509 certificates Nov 8 00:26:28.790021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:26:28.790027 kernel: Key type .fscrypt registered Nov 8 00:26:28.790033 kernel: Key type fscrypt-provisioning registered Nov 8 00:26:28.790039 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:26:28.790045 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:26:28.790052 kernel: ima: No architecture policies found Nov 8 00:26:28.790059 kernel: clk: Disabling unused clocks Nov 8 00:26:28.790065 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:26:28.790071 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:26:28.790077 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:26:28.790083 kernel: Run /init as init process Nov 8 00:26:28.790090 kernel: with arguments: Nov 8 00:26:28.790099 kernel: /init Nov 8 00:26:28.790105 kernel: with environment: Nov 8 00:26:28.790112 kernel: HOME=/ Nov 8 00:26:28.790119 kernel: TERM=linux Nov 8 00:26:28.790126 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:26:28.790134 systemd[1]: Detected virtualization vmware. Nov 8 00:26:28.790141 systemd[1]: Detected architecture x86-64. Nov 8 00:26:28.790147 systemd[1]: Running in initrd. Nov 8 00:26:28.790153 systemd[1]: No hostname configured, using default hostname. Nov 8 00:26:28.790160 systemd[1]: Hostname set to <localhost>. Nov 8 00:26:28.790167 systemd[1]: Initializing machine ID from random generator. Nov 8 00:26:28.790173 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:26:28.790180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:28.790186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:28.790193 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:26:28.790200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:26:28.790206 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:26:28.790212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:26:28.790221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:26:28.790228 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:26:28.790234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:28.790241 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:28.790247 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:26:28.790253 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:26:28.790260 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:26:28.790267 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:26:28.790274 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:26:28.790280 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:26:28.790286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:26:28.790293 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:26:28.790299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:28.790306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:28.790312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:28.790319 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:26:28.790326 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:26:28.790333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:26:28.790339 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:26:28.790346 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:26:28.790352 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:26:28.790358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:26:28.790365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:28.790371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:26:28.790388 systemd-journald[216]: Collecting audit messages is disabled. Nov 8 00:26:28.792373 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:28.792382 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:26:28.792392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:26:28.792407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:26:28.792415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:28.792422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:26:28.792429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:28.792435 kernel: Bridge firewalling registered Nov 8 00:26:28.792444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:26:28.792451 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:28.792457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:26:28.792467 systemd-journald[216]: Journal started Nov 8 00:26:28.792481 systemd-journald[216]: Runtime Journal (/run/log/journal/014e752593104202b7791e02f48b387a) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:26:28.752067 systemd-modules-load[217]: Inserted module 'overlay' Nov 8 00:26:28.780423 systemd-modules-load[217]: Inserted module 'br_netfilter' Nov 8 00:26:28.794408 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:26:28.795273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:28.795496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:28.804654 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:26:28.804958 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:28.806506 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:26:28.811426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:26:28.812527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:26:28.817603 dracut-cmdline[248]: dracut-dracut-053 Nov 8 00:26:28.819611 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:26:28.836525 systemd-resolved[252]: Positive Trust Anchors: Nov 8 00:26:28.836534 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:26:28.836555 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:26:28.838103 systemd-resolved[252]: Defaulting to hostname 'linux'. Nov 8 00:26:28.839834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:26:28.839988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:28.867410 kernel: SCSI subsystem initialized Nov 8 00:26:28.874410 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:26:28.881408 kernel: iscsi: registered transport (tcp) Nov 8 00:26:28.895522 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:26:28.895538 kernel: QLogic iSCSI HBA Driver Nov 8 00:26:28.914616 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:26:28.919665 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:26:28.938357 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:26:28.938380 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:26:28.938390 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:26:28.968442 kernel: raid6: avx2x4 gen() 54959 MB/s Nov 8 00:26:28.985442 kernel: raid6: avx2x2 gen() 54083 MB/s Nov 8 00:26:29.002531 kernel: raid6: avx2x1 gen() 46943 MB/s Nov 8 00:26:29.002550 kernel: raid6: using algorithm avx2x4 gen() 54959 MB/s Nov 8 00:26:29.020535 kernel: raid6: .... xor() 22382 MB/s, rmw enabled Nov 8 00:26:29.020557 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:26:29.033409 kernel: xor: automatically using best checksumming function avx Nov 8 00:26:29.131416 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:26:29.136149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:26:29.141618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:29.148713 systemd-udevd[433]: Using default interface naming scheme 'v255'. Nov 8 00:26:29.151101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 8 00:26:29.156492 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:26:29.163236 dracut-pre-trigger[435]: rd.md=0: removing MD RAID activation Nov 8 00:26:29.177846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:26:29.182601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:26:29.251862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:29.255495 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:26:29.266509 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:26:29.267259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:26:29.267876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:29.268090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:26:29.273502 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:26:29.280996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:26:29.316524 kernel: libata version 3.00 loaded. Nov 8 00:26:29.319421 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 8 00:26:29.320409 kernel: scsi host0: ata_piix Nov 8 00:26:29.320570 kernel: scsi host1: ata_piix Nov 8 00:26:29.327507 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 8 00:26:29.327524 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 8 00:26:29.333416 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 8 00:26:29.334684 kernel: vmw_pvscsi: using 64bit dma Nov 8 00:26:29.334700 kernel: vmw_pvscsi: max_id: 16 Nov 8 00:26:29.334708 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 8 00:26:29.337505 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 8 00:26:29.337520 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 8 00:26:29.337531 kernel: vmw_pvscsi: using MSI-X Nov 8 00:26:29.337538 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 8 00:26:29.341416 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 Nov 8 00:26:29.343450 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 8 00:26:29.348411 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 8 00:26:29.350436 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 8 00:26:29.352419 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 8 00:26:29.363757 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:26:29.364507 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:26:29.364745 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:29.365065 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:29.365162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:29.365238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:29.365340 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:29.376586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:29.387153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:26:29.391551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:29.403734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:29.492497 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 8 00:26:29.498417 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 8 00:26:29.503604 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 8 00:26:29.512838 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:26:29.512858 kernel: AES CTR mode by8 optimization enabled Nov 8 00:26:29.522565 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 8 00:26:29.522678 kernel: sd 2:0:0:0: [sda] Write Protect is off Nov 8 00:26:29.522800 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 8 00:26:29.522864 kernel: sd 2:0:0:0: [sda] Cache data unavailable Nov 8 00:26:29.523989 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through Nov 8 00:26:29.525662 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 8 00:26:29.525773 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:26:29.531494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:29.531515 kernel: sd 2:0:0:0: [sda] Attached SCSI disk Nov 8 00:26:29.534411 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:26:29.560413 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (492) Nov 8 00:26:29.564413 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (489) Nov 8 00:26:29.566137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 8 00:26:29.568716 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 8 00:26:29.570937 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 8 00:26:29.571431 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 8 00:26:29.576480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:26:29.579228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:26:29.600429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:29.607406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:30.606695 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:30.607084 disk-uuid[594]: The operation has completed successfully. Nov 8 00:26:30.645791 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:26:30.646058 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:26:30.649493 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:26:30.651208 sh[612]: Success Nov 8 00:26:30.658411 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:26:30.701051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:26:30.706447 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:26:30.706647 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
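Device units such as dev-disk-by\x2dlabel-ROOT.device above come from systemd's path escaping, in which "/" maps to "-" and other unsafe bytes to \xNN. A simplified sketch of that mapping (systemd-escape --path is the canonical tool; this approximation ignores some corner cases such as ":" handling):

    # Hypothetical sketch of systemd's path-to-unit-name escaping as seen in
    # the device units above. NB: isalnum() is Unicode-aware; systemd is
    # ASCII-only, so this is an approximation.
    def systemd_escape_path(path: str) -> str:
        trimmed = path.strip("/") or "/"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
    # dev-disk-by\x2dlabel-ROOT.device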
Nov 8 00:26:30.723927 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:26:30.723948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:30.723956 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:26:30.723963 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:26:30.723970 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:26:30.731404 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:26:30.731932 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:26:30.741487 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 8 00:26:30.742600 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:26:30.756808 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:30.756832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:30.756841 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:26:30.764584 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:26:30.769408 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:30.770146 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:26:30.773381 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:26:30.778488 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:26:30.811965 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:26:30.819527 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:26:30.876470 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:26:30.883485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:26:30.884771 ignition[671]: Ignition 2.19.0 Nov 8 00:26:30.884777 ignition[671]: Stage: fetch-offline Nov 8 00:26:30.884820 ignition[671]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:30.884827 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:30.884920 ignition[671]: parsed url from cmdline: "" Nov 8 00:26:30.884922 ignition[671]: no config URL provided Nov 8 00:26:30.884926 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:26:30.884931 ignition[671]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:26:30.885414 ignition[671]: config successfully fetched Nov 8 00:26:30.885439 ignition[671]: parsing config with SHA512: 04025625e7cfe85ad58d4ccf1aadc97e86a3e4c6e8285938a01bca915efe77e07392bd128bbb2ec29adfbb1a2ffd2a59fbd86380015b02eca3ec3827c9029e11 Nov 8 00:26:30.889921 unknown[671]: fetched base config from "system" Nov 8 00:26:30.890053 unknown[671]: fetched user config from "vmware" Nov 8 00:26:30.890461 ignition[671]: fetch-offline: fetch-offline passed Nov 8 00:26:30.890619 ignition[671]: Ignition finished successfully Nov 8 00:26:30.891273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
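Ignition's fetch-offline stage above identifies the fetched config by its SHA512 digest. Assuming the digest is taken over the raw config bytes as read from disk, a minimal sketch to reproduce such a fingerprint:

    # Hypothetical sketch: reproduce the "parsing config with SHA512: ..."
    # fingerprint, assuming Ignition hashes the raw config bytes.
    import hashlib

    def config_fingerprint(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    # e.g. config_fingerprint("/usr/lib/ignition/user.ign")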
Nov 8 00:26:30.897439 systemd-networkd[803]: lo: Link UP Nov 8 00:26:30.897445 systemd-networkd[803]: lo: Gained carrier Nov 8 00:26:30.898217 systemd-networkd[803]: Enumeration completed Nov 8 00:26:30.898358 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:26:30.898499 systemd[1]: Reached target network.target - Network. Nov 8 00:26:30.898565 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 8 00:26:30.898588 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:26:30.902390 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:26:30.902499 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:26:30.901983 systemd-networkd[803]: ens192: Link UP Nov 8 00:26:30.901985 systemd-networkd[803]: ens192: Gained carrier Nov 8 00:26:30.906499 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:26:30.914445 ignition[806]: Ignition 2.19.0 Nov 8 00:26:30.914451 ignition[806]: Stage: kargs Nov 8 00:26:30.914545 ignition[806]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:30.914551 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:30.915037 ignition[806]: kargs: kargs passed Nov 8 00:26:30.915058 ignition[806]: Ignition finished successfully Nov 8 00:26:30.916380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:26:30.920608 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:26:30.927880 ignition[814]: Ignition 2.19.0 Nov 8 00:26:30.927886 ignition[814]: Stage: disks Nov 8 00:26:30.927991 ignition[814]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:30.927997 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:30.928545 ignition[814]: disks: disks passed Nov 8 00:26:30.928593 ignition[814]: Ignition finished successfully Nov 8 00:26:30.929466 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:26:30.929739 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:26:30.929963 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:26:30.930171 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:26:30.930364 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:26:30.930517 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:26:30.934620 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:26:30.944869 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:26:30.946246 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:26:30.949448 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:26:31.004180 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:26:31.004407 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:26:31.004672 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:26:31.012577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:26:31.014021 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 8 00:26:31.014434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:26:31.014467 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:26:31.014485 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:26:31.019299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:26:31.022386 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (830) Nov 8 00:26:31.022423 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:31.022434 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:31.022444 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:26:31.021146 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:26:31.027481 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:26:31.029185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:26:31.054835 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:26:31.057575 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:26:31.059932 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:26:31.062246 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:26:31.112947 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:26:31.116610 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:26:31.119003 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:26:31.121467 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:31.134274 ignition[943]: INFO : Ignition 2.19.0 Nov 8 00:26:31.134274 ignition[943]: INFO : Stage: mount Nov 8 00:26:31.134274 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:31.134274 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:31.134274 ignition[943]: INFO : mount: mount passed Nov 8 00:26:31.134274 ignition[943]: INFO : Ignition finished successfully Nov 8 00:26:31.135165 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:26:31.140767 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:26:31.141031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:26:31.719960 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:26:31.728602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:26:31.736415 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (955) Nov 8 00:26:31.739653 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:31.739667 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:31.739674 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:26:31.744429 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:26:31.744059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:26:31.758821 ignition[972]: INFO : Ignition 2.19.0 Nov 8 00:26:31.758821 ignition[972]: INFO : Stage: files Nov 8 00:26:31.759222 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:31.759222 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:31.759854 ignition[972]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:26:31.760373 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:26:31.760373 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:26:31.762630 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:26:31.762883 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:26:31.763219 unknown[972]: wrote ssh authorized keys file for user: core Nov 8 00:26:31.763466 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:26:31.766083 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:26:31.766319 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:26:31.827509 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:26:31.983687 systemd-networkd[803]: ens192: Gained IPv6LL Nov 8 00:26:32.286248 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:26:32.483032 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:32.483287 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:26:32.483287 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:26:32.483287 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:26:32.483287 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:26:32.518027 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:26:32.520214 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:26:32.520378 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:26:32.520378 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:26:32.520378 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:26:32.521302 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:26:32.521302 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:26:32.521302 ignition[972]: INFO : files: files passed Nov 8 00:26:32.521302 ignition[972]: INFO : Ignition finished successfully Nov 8 00:26:32.521372 systemd[1]: Finished 
ignition-files.service - Ignition (files). Nov 8 00:26:32.527632 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:26:32.529502 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:26:32.529740 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:26:32.529784 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:26:32.535696 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:32.535696 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:32.536596 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:32.537519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:26:32.537811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:26:32.541651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:26:32.553432 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:26:32.553489 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:26:32.553745 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:26:32.553853 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:26:32.554042 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:26:32.554451 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:26:32.563616 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:26:32.567614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:26:32.572743 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:32.572928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:32.573139 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:26:32.573318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:26:32.573377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:26:32.573744 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:26:32.573887 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:26:32.574059 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:26:32.574238 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:26:32.574448 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:26:32.574645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:26:32.574976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:26:32.575174 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:26:32.575368 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:26:32.575611 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:26:32.575758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Nov 8 00:26:32.575818 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:26:32.576049 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:32.576195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:32.576370 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:26:32.576423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:32.576585 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:26:32.576641 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:26:32.576870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:26:32.576928 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:26:32.577178 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:26:32.577314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:26:32.580495 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:32.580721 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:26:32.581011 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:26:32.581219 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:26:32.581283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:26:32.581495 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:26:32.581561 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:26:32.581774 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:26:32.581858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:26:32.582086 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:26:32.582165 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:26:32.593641 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:26:32.596639 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:26:32.596923 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:26:32.597235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:32.597667 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:26:32.597747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:26:32.600588 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:26:32.600646 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:26:32.604334 ignition[1026]: INFO : Ignition 2.19.0 Nov 8 00:26:32.604334 ignition[1026]: INFO : Stage: umount Nov 8 00:26:32.606957 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:32.606957 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:32.606957 ignition[1026]: INFO : umount: umount passed Nov 8 00:26:32.606957 ignition[1026]: INFO : Ignition finished successfully Nov 8 00:26:32.605740 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:26:32.605804 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:26:32.606130 systemd[1]: Stopped target network.target - Network. Nov 8 00:26:32.606323 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 8 00:26:32.606355 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:26:32.606489 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:26:32.606514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:26:32.606714 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:26:32.606739 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:26:32.606877 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:26:32.606899 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:26:32.607125 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:26:32.607291 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:26:32.612144 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:26:32.612208 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:26:32.612659 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:26:32.612695 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:32.620615 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:26:32.620730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:26:32.620759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:26:32.620883 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 8 00:26:32.620905 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:26:32.621058 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:32.621929 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:26:32.622236 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:26:32.623314 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:26:32.624797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:26:32.624845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:32.625123 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:26:32.625146 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:32.625252 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:26:32.625273 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:26:32.629476 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:26:32.629536 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:26:32.635776 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:26:32.635878 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:26:32.636420 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:26:32.636475 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:32.636651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:26:32.636675 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:32.636878 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:26:32.636909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Nov 8 00:26:32.637271 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:26:32.637301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:26:32.637693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:26:32.637724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:32.643671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:26:32.644220 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:26:32.644427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:32.644799 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:26:32.644830 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:26:32.645188 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:26:32.645218 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:32.645601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:32.645631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:32.647634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:26:32.647903 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:26:32.708940 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:26:32.709168 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:26:32.709459 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:26:32.709594 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:26:32.709627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:26:32.713529 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:26:32.723287 systemd[1]: Switching root. 
Nov 8 00:26:32.756152 systemd-journald[216]: Journal stopped
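Since every entry carries a microsecond timestamp, lines like the above can be mined for boot-phase timing. A hypothetical sketch that splits such journal lines into (timestamp, source, message) tuples for that kind of analysis:

    # Hypothetical sketch: split journal lines like the ones above into
    # (timestamp, source, message). Assumes the "Mon D HH:MM:SS.micro" form.
    import re

    LINE = re.compile(r"(\w{3}\s+\d+ \d\d:\d\d:\d\d\.\d+) (\S+?): (.*)")

    def parse(line: str):
        m = LINE.match(line)
        return m.groups() if m else None

    print(parse("Nov 8 00:26:29.131416 kernel: Btrfs loaded, zoned=no, fsverity=no"))
    # ('Nov 8 00:26:29.131416', 'kernel', 'Btrfs loaded, zoned=no, fsverity=no')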
6816.00 BogoMIPS (lpj=3408000) Nov 8 00:26:28.740963 kernel: Disabled fast string operations Nov 8 00:26:28.740969 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:26:28.740974 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:26:28.740981 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:26:28.740987 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:26:28.740994 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:26:28.741000 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 8 00:26:28.741005 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 8 00:26:28.741011 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 8 00:26:28.741016 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:26:28.741022 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:26:28.741028 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:26:28.741033 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 8 00:26:28.741039 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:26:28.741045 kernel: active return thunk: its_return_thunk Nov 8 00:26:28.741051 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:26:28.741057 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:26:28.741062 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:26:28.741068 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:26:28.741073 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:26:28.741079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:26:28.741084 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:26:28.741090 kernel: pid_max: default: 131072 minimum: 1024 Nov 8 00:26:28.741096 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:26:28.741102 kernel: landlock: Up and running. Nov 8 00:26:28.741108 kernel: SELinux: Initializing. Nov 8 00:26:28.741113 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:26:28.741119 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:26:28.741125 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 8 00:26:28.741130 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:26:28.741136 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:26:28.741141 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:26:28.741148 kernel: Performance Events: Skylake events, core PMU driver. 
Nov 8 00:26:28.741154 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 8 00:26:28.741159 kernel: core: CPUID marked event: 'instructions' unavailable Nov 8 00:26:28.741165 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 8 00:26:28.741170 kernel: core: CPUID marked event: 'cache references' unavailable Nov 8 00:26:28.741175 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 8 00:26:28.741181 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 8 00:26:28.741186 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 8 00:26:28.741193 kernel: ... version: 1 Nov 8 00:26:28.741198 kernel: ... bit width: 48 Nov 8 00:26:28.741204 kernel: ... generic registers: 4 Nov 8 00:26:28.741209 kernel: ... value mask: 0000ffffffffffff Nov 8 00:26:28.741215 kernel: ... max period: 000000007fffffff Nov 8 00:26:28.741220 kernel: ... fixed-purpose events: 0 Nov 8 00:26:28.741226 kernel: ... event mask: 000000000000000f Nov 8 00:26:28.741232 kernel: signal: max sigframe size: 1776 Nov 8 00:26:28.741237 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:26:28.741244 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:26:28.741250 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:26:28.741255 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:26:28.741261 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:26:28.741267 kernel: .... node #0, CPUs: #1 Nov 8 00:26:28.741272 kernel: Disabled fast string operations Nov 8 00:26:28.741277 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 8 00:26:28.741283 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 8 00:26:28.741288 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:26:28.741294 kernel: smpboot: Max logical packages: 128 Nov 8 00:26:28.741301 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 8 00:26:28.741306 kernel: devtmpfs: initialized Nov 8 00:26:28.741312 kernel: x86/mm: Memory block size: 128MB Nov 8 00:26:28.741318 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 8 00:26:28.741324 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:26:28.741330 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 8 00:26:28.741336 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:26:28.741341 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:26:28.741347 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:26:28.741354 kernel: audit: type=2000 audit(1762561587.086:1): state=initialized audit_enabled=0 res=1 Nov 8 00:26:28.741359 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:26:28.741364 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:26:28.741370 kernel: cpuidle: using governor menu Nov 8 00:26:28.741376 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 8 00:26:28.741381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:26:28.741387 kernel: dca service started, version 1.12.1 Nov 8 00:26:28.741392 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 8 00:26:28.741439 kernel: PCI: Using configuration type 1 for base access Nov 8 00:26:28.741448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:26:28.741453 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:26:28.741459 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:26:28.741465 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:26:28.741470 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:26:28.741476 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:26:28.741481 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:26:28.741487 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:26:28.741492 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:26:28.741499 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 8 00:26:28.741505 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:26:28.741510 kernel: ACPI: Interpreter enabled Nov 8 00:26:28.741516 kernel: ACPI: PM: (supports S0 S1 S5) Nov 8 00:26:28.741521 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:26:28.741527 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:26:28.741532 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:26:28.741538 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 8 00:26:28.741543 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 8 00:26:28.741625 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:26:28.741680 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 8 00:26:28.741729 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 8 00:26:28.741737 kernel: PCI host bridge to bus 0000:00 Nov 8 00:26:28.741789 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:26:28.741833 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 8 00:26:28.741879 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:26:28.741922 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:26:28.741964 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 8 00:26:28.742007 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 8 00:26:28.742135 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 8 00:26:28.742195 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 8 00:26:28.742306 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 8 00:26:28.742365 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 8 00:26:28.742430 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 8 00:26:28.742482 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 8 00:26:28.742531 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 8 00:26:28.742581 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 8 00:26:28.742631 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 8 00:26:28.742710 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 8 00:26:28.742792 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 8 00:26:28.742841 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 8 00:26:28.742895 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 8 00:26:28.742945 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 8 00:26:28.742994 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Nov 8 00:26:28.743048 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 8 00:26:28.743225 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 8 00:26:28.743310 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 8 00:26:28.743393 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 8 00:26:28.743864 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 8 00:26:28.743919 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:26:28.743976 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 8 00:26:28.744037 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.744090 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.744149 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.744201 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.744257 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.744310 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.744365 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.744427 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.744483 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.746323 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.746389 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.746455 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.746515 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.746572 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.746627 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.746681 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.746737 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.746790 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.746848 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.746900 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.746955 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.747007 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.747062 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.747114 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.747171 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.747224 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.747279 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.747331 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.747389 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.748496 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.748558 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.748616 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.748673 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.748738 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.748795 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Nov 8 00:26:28.748848 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.748904 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.748959 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.749017 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.749132 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.749192 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.749244 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.749300 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.749355 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.750428 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.750488 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.750602 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.750658 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.750735 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.750792 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.750847 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.750898 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.750953 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.751004 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.751059 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.751111 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.751170 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.751222 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.751276 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.751327 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.753433 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.753491 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.753552 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:26:28.753604 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.753656 kernel: pci_bus 0000:01: extended config space not accessible Nov 8 00:26:28.753707 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:26:28.753782 kernel: pci_bus 0000:02: extended config space not accessible Nov 8 00:26:28.753800 kernel: acpiphp: Slot [32] registered Nov 8 00:26:28.753809 kernel: acpiphp: Slot [33] registered Nov 8 00:26:28.753815 kernel: acpiphp: Slot [34] registered Nov 8 00:26:28.753820 kernel: acpiphp: Slot [35] registered Nov 8 00:26:28.753826 kernel: acpiphp: Slot [36] registered Nov 8 00:26:28.753834 kernel: acpiphp: Slot [37] registered Nov 8 00:26:28.753840 kernel: acpiphp: Slot [38] registered Nov 8 00:26:28.753845 kernel: acpiphp: Slot [39] registered Nov 8 00:26:28.753851 kernel: acpiphp: Slot [40] registered Nov 8 00:26:28.753857 kernel: acpiphp: Slot [41] registered Nov 8 00:26:28.753862 kernel: acpiphp: Slot [42] registered Nov 8 00:26:28.753869 kernel: acpiphp: Slot [43] registered Nov 8 00:26:28.753875 kernel: acpiphp: Slot [44] registered Nov 8 00:26:28.753880 kernel: acpiphp: Slot [45] registered Nov 8 00:26:28.753886 kernel: 
acpiphp: Slot [46] registered Nov 8 00:26:28.753892 kernel: acpiphp: Slot [47] registered Nov 8 00:26:28.753897 kernel: acpiphp: Slot [48] registered Nov 8 00:26:28.753903 kernel: acpiphp: Slot [49] registered Nov 8 00:26:28.753908 kernel: acpiphp: Slot [50] registered Nov 8 00:26:28.753914 kernel: acpiphp: Slot [51] registered Nov 8 00:26:28.753921 kernel: acpiphp: Slot [52] registered Nov 8 00:26:28.753926 kernel: acpiphp: Slot [53] registered Nov 8 00:26:28.753932 kernel: acpiphp: Slot [54] registered Nov 8 00:26:28.753937 kernel: acpiphp: Slot [55] registered Nov 8 00:26:28.753943 kernel: acpiphp: Slot [56] registered Nov 8 00:26:28.753949 kernel: acpiphp: Slot [57] registered Nov 8 00:26:28.753954 kernel: acpiphp: Slot [58] registered Nov 8 00:26:28.753960 kernel: acpiphp: Slot [59] registered Nov 8 00:26:28.753965 kernel: acpiphp: Slot [60] registered Nov 8 00:26:28.753971 kernel: acpiphp: Slot [61] registered Nov 8 00:26:28.753978 kernel: acpiphp: Slot [62] registered Nov 8 00:26:28.753984 kernel: acpiphp: Slot [63] registered Nov 8 00:26:28.754039 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 8 00:26:28.754089 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:26:28.754137 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:26:28.754185 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:26:28.754234 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 8 00:26:28.754283 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 8 00:26:28.754344 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 8 00:26:28.754423 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 8 00:26:28.754474 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 8 00:26:28.754529 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 8 00:26:28.754581 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 8 00:26:28.754630 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 8 00:26:28.754680 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:26:28.754769 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:26:28.754819 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:26:28.754870 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:26:28.754919 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:26:28.754969 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:26:28.755020 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:26:28.755069 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:26:28.755117 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:26:28.755170 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:26:28.755219 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:26:28.755268 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:26:28.755317 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:26:28.755366 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:26:28.757443 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:26:28.757508 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:26:28.757566 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:26:28.757619 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:26:28.757670 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:26:28.757758 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:26:28.757812 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:26:28.757863 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:26:28.757913 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:26:28.757964 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:26:28.758014 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:26:28.758064 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:26:28.758114 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:26:28.758163 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:26:28.758215 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:26:28.758272 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 8 00:26:28.758324 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 8 00:26:28.758375 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 8 00:26:28.758441 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 8 00:26:28.758493 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 8 00:26:28.758543 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:26:28.758594 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 8 00:26:28.758649 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:26:28.758704 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:26:28.758755 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:26:28.758806 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:26:28.758856 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:26:28.758907 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:26:28.758957 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:26:28.759008 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:26:28.759062 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:26:28.759114 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:26:28.759164 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:26:28.759214 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:26:28.759265 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:26:28.759316 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:26:28.759367 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:26:28.760243 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:26:28.760300 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:26:28.760350 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:26:28.760408 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:26:28.760459 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:26:28.760508 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:26:28.760557 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:26:28.760605 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:26:28.760659 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:26:28.760747 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:26:28.760798 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:26:28.760848 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:26:28.760897 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:26:28.760947 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:26:28.760997 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:26:28.761046 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:26:28.761097 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:26:28.761148 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:26:28.761197 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:26:28.761247 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:26:28.761296 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:26:28.761346 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:26:28.761403 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:26:28.761456 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:26:28.761508 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:26:28.761559 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:26:28.761609 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:26:28.761659 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:26:28.761708 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:26:28.761758 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:26:28.761808 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:26:28.761861 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:26:28.761911 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:26:28.761960 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:26:28.762010 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:26:28.762060 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:26:28.762109 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:26:28.762159 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:26:28.762208 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:26:28.762260 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:26:28.762309 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:26:28.762358 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:26:28.762669 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:26:28.762775 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:26:28.762827 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:26:28.762877 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:26:28.762927 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:26:28.762979 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:26:28.763030 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:26:28.763079 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:26:28.763128 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:26:28.763177 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:26:28.763227 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:26:28.763276 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:26:28.763324 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:26:28.763377 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:26:28.763440 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:26:28.763491 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:26:28.763540 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:26:28.763588 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:26:28.763637 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:26:28.763692 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:26:28.763745 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:26:28.763798 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:26:28.763848 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:26:28.763897 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:26:28.763906 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 8 00:26:28.763912 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 8 00:26:28.763918 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Nov 8 00:26:28.763923 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:26:28.763929 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 8 00:26:28.763935 kernel: iommu: Default domain type: Translated Nov 8 00:26:28.763942 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:26:28.763948 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:26:28.763954 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:26:28.763959 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 8 00:26:28.763965 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 8 00:26:28.764013 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 8 00:26:28.764062 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 8 00:26:28.764111 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:26:28.764121 kernel: vgaarb: loaded Nov 8 00:26:28.764127 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 8 00:26:28.764132 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 8 00:26:28.764138 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:26:28.764144 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:26:28.764149 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:26:28.764155 kernel: pnp: PnP ACPI init Nov 8 00:26:28.764207 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 8 00:26:28.764253 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 8 00:26:28.764301 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 8 00:26:28.764349 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 8 00:26:28.764460 kernel: pnp 00:06: [dma 2] Nov 8 00:26:28.764520 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 8 00:26:28.764566 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 8 00:26:28.764611 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 8 00:26:28.764622 kernel: pnp: PnP ACPI: found 8 devices Nov 8 00:26:28.764628 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:26:28.764633 kernel: NET: Registered PF_INET protocol family Nov 8 00:26:28.764639 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:26:28.764645 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:26:28.764651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:26:28.764656 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:26:28.764662 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:26:28.764668 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:26:28.764674 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:26:28.764680 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:26:28.764689 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:26:28.764695 kernel: NET: Registered PF_XDP protocol family Nov 8 00:26:28.764747 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 8 00:26:28.764797 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 8 00:26:28.764847 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:26:28.764900 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:26:28.764949 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:26:28.764998 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 8 00:26:28.765047 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 8 00:26:28.765097 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 8 00:26:28.765146 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 8 00:26:28.765198 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 8 00:26:28.765247 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 8 00:26:28.765297 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 8 00:26:28.765346 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 8 00:26:28.765402 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 8 00:26:28.765454 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 8 00:26:28.765507 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 8 00:26:28.765557 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 8 00:26:28.765608 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 8 00:26:28.765657 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 8 00:26:28.765707 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 8 00:26:28.765757 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 8 00:26:28.765808 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 8 00:26:28.765858 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 8 00:26:28.765908 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:26:28.765957 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:26:28.766006 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766055 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766107 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766156 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766205 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766254 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766304 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766353 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766426 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766477 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766530 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766578 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766626 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766675 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766728 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766776 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766824 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766872 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.766921 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.766973 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767022 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767071 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767119 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767168 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767217 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767265 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767315 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767367 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767422 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767472 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767521 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767570 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767619 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767668 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767717 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767769 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767818 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767867 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.767916 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.767965 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768014 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768062 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768110 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768162 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768210 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768259 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768308 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768356 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768417 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768470 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768519 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768568 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768619 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Nov 8 00:26:28.768669 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768722 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768772 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768821 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768870 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.768919 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.768968 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769017 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769067 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769119 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769168 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769217 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769267 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769316 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769364 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769484 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769535 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769584 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769637 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769690 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769739 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769792 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769841 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769890 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.769938 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.769987 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.770036 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.770085 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.770136 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.770184 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:26:28.770233 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:26:28.770281 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:26:28.770330 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 8 00:26:28.770379 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:26:28.771477 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:26:28.771536 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:26:28.771595 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 8 00:26:28.771647 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:26:28.771697 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:26:28.771746 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:26:28.771822 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:26:28.771875 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:26:28.771925 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:26:28.771975 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:26:28.772025 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:26:28.772079 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:26:28.772128 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:26:28.772178 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:26:28.772227 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:26:28.772277 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:26:28.772326 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:26:28.772376 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:26:28.772434 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:26:28.772484 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:26:28.772533 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:26:28.772587 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:26:28.772636 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:26:28.772711 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:26:28.772780 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:26:28.772829 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:26:28.772880 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:26:28.772930 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:26:28.772980 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:26:28.773030 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:26:28.773081 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 8 00:26:28.773132 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:26:28.773181 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:26:28.773231 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:26:28.773280 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:26:28.773333 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:26:28.773382 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:26:28.774454 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:26:28.774511 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:26:28.774564 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:26:28.774615 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:26:28.774665 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:26:28.774715 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:26:28.774799 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:26:28.774849 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:26:28.774903 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:26:28.774953 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:26:28.775003 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:26:28.775052 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:26:28.775102 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:26:28.775180 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:26:28.775936 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:26:28.775997 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:26:28.776049 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:26:28.776103 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:26:28.776153 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:26:28.776203 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:26:28.776252 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:26:28.776302 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:26:28.776352 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:26:28.776448 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:26:28.776502 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:26:28.776553 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:26:28.776602 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:26:28.776655 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:26:28.776712 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:26:28.776763 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:26:28.776812 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:26:28.776861 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:26:28.776910 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:26:28.776959 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:26:28.777009 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:26:28.777059 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:26:28.777112 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:26:28.777251 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:26:28.777304 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:26:28.777354 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:26:28.777431 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:26:28.777484 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:26:28.777533 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:26:28.777582 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:26:28.777631 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:26:28.777680 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:26:28.777732 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:26:28.777782 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:26:28.777831 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:26:28.777882 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:26:28.777931 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:26:28.777981 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:26:28.778031 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:26:28.778081 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:26:28.778131 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:26:28.778183 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:26:28.778235 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:26:28.778284 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:26:28.778369 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:26:28.778469 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:26:28.778522 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:26:28.778572 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:26:28.778622 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:26:28.778672 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:26:28.778754 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:26:28.778807 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:26:28.778856 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:26:28.778906 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:26:28.778956 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:26:28.779006 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:26:28.779056 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:26:28.779106 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:26:28.779155 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:26:28.779205 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:26:28.779257 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:26:28.779302 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:26:28.779346 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:26:28.779389 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:26:28.779458 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:26:28.779507 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 8 00:26:28.779586 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 8 00:26:28.779656 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:26:28.779725 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:26:28.779779 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:26:28.779825 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:26:28.779870 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:26:28.779914 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:26:28.779964 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 8 00:26:28.780010 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 8 00:26:28.780058 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:26:28.780108 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 8 00:26:28.780153 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Nov 8 00:26:28.780198 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:26:28.780247 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 8 00:26:28.780293 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 8 00:26:28.780339 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:26:28.780391 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 8 00:26:28.780488 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:26:28.780537 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 8 00:26:28.780583 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:26:28.780633 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 8 00:26:28.780679 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:26:28.780773 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 8 00:26:28.780819 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:26:28.780871 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 8 00:26:28.780917 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:26:28.780977 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 8 00:26:28.781023 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 8 00:26:28.781071 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:26:28.781119 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 8 00:26:28.781166 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 8 00:26:28.781212 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:26:28.781261 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 8 00:26:28.781308 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 8 00:26:28.781360 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:26:28.781428 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 8 00:26:28.781477 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:26:28.781526 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 8 00:26:28.781573 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:26:28.781622 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 8 00:26:28.781668 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:26:28.781720 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 8 00:26:28.781767 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:26:28.781818 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 8 00:26:28.781864 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:26:28.781913 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 8 00:26:28.781960 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 8 00:26:28.782008 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:26:28.782057 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 8 00:26:28.782103 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 8 00:26:28.782148 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:26:28.782197 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Nov 8 00:26:28.782243 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 8 00:26:28.782288 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:26:28.782340 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 8 00:26:28.782387 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:26:28.782460 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 8 00:26:28.782506 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:26:28.782555 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 8 00:26:28.782617 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:26:28.782671 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 8 00:26:28.782769 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:26:28.782821 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 8 00:26:28.782867 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:26:28.782920 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 8 00:26:28.782966 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 8 00:26:28.783014 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:26:28.783063 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 8 00:26:28.783112 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 8 00:26:28.783158 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:26:28.783207 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 8 00:26:28.783253 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:26:28.783305 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 8 00:26:28.783352 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:26:28.783411 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 8 00:26:28.783462 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:26:28.783512 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 8 00:26:28.783560 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:26:28.783610 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 8 00:26:28.783660 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:26:28.783714 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 8 00:26:28.783761 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:26:28.783815 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:26:28.783824 kernel: PCI: CLS 32 bytes, default 64 Nov 8 00:26:28.783831 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:26:28.783840 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:26:28.783846 kernel: clocksource: Switched to clocksource tsc Nov 8 00:26:28.783852 kernel: Initialise system trusted keyrings Nov 8 00:26:28.783858 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:26:28.783864 kernel: Key type asymmetric registered Nov 8 00:26:28.783870 kernel: Asymmetric key parser 'x509' registered Nov 8 00:26:28.783876 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Nov 8 00:26:28.783882 kernel: io scheduler mq-deadline registered Nov 8 00:26:28.783888 kernel: io scheduler kyber registered Nov 8 00:26:28.783894 kernel: io scheduler bfq registered Nov 8 00:26:28.783947 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 8 00:26:28.783999 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784051 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 8 00:26:28.784101 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784152 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 8 00:26:28.784203 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784254 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 8 00:26:28.784307 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.784358 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 8 00:26:28.785496 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785559 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 8 00:26:28.785615 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785672 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 8 00:26:28.785724 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785776 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 8 00:26:28.785829 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785879 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 8 00:26:28.785931 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.785985 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 8 00:26:28.786036 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786086 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 8 00:26:28.786137 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786187 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 8 00:26:28.786238 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786288 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 8 00:26:28.786342 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.786393 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Nov 8 00:26:28.787056 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787113 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 8 00:26:28.787167 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787223 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 8 00:26:28.787274 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787327 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 8 00:26:28.787378 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787467 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 8 00:26:28.787521 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.787576 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 8 00:26:28.787628 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788004 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 8 00:26:28.788062 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788115 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 8 00:26:28.788168 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788223 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 8 00:26:28.788275 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788327 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 8 00:26:28.788379 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788445 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 8 00:26:28.788501 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788553 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 8 00:26:28.788604 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788656 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 8 00:26:28.788713 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788766 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 8 00:26:28.788836 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.788906 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 8 00:26:28.788957 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789009 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 8 00:26:28.789060 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789111 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 8 00:26:28.789165 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789217 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 8 00:26:28.789268 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789319 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 8 00:26:28.789371 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:26:28.789382 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:26:28.789388 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:26:28.789395 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:26:28.789535 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 8 00:26:28.789542 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:26:28.789548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:26:28.789605 kernel: rtc_cmos 00:01: registered as rtc0 Nov 8 00:26:28.789879 kernel: rtc_cmos 00:01: setting system clock to 2025-11-08T00:26:28 UTC (1762561588) Nov 8 00:26:28.789937 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:26:28.789946 kernel: intel_pstate: CPU model not supported Nov 8 00:26:28.789953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:26:28.789959 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:26:28.789965 kernel: Segment Routing with IPv6 Nov 8 00:26:28.789972 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:26:28.789978 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:26:28.789984 kernel: Key type dns_resolver registered Nov 8 00:26:28.789990 kernel: IPI shorthand broadcast: enabled Nov 8 00:26:28.789999 kernel: sched_clock: Marking stable (874436697, 215198721)->(1141805795, -52170377) Nov 8 00:26:28.790009 kernel: registered taskstats version 1 Nov 8 00:26:28.790015 kernel: Loading compiled-in X.509 certificates Nov 8 00:26:28.790021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:26:28.790027 kernel: Key type .fscrypt registered Nov 8 00:26:28.790033 kernel: Key type fscrypt-provisioning registered Nov 8 00:26:28.790039 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:26:28.790045 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:26:28.790052 kernel: ima: No architecture policies found Nov 8 00:26:28.790059 kernel: clk: Disabling unused clocks Nov 8 00:26:28.790065 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:26:28.790071 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:26:28.790077 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:26:28.790083 kernel: Run /init as init process Nov 8 00:26:28.790090 kernel: with arguments: Nov 8 00:26:28.790099 kernel: /init Nov 8 00:26:28.790105 kernel: with environment: Nov 8 00:26:28.790112 kernel: HOME=/ Nov 8 00:26:28.790119 kernel: TERM=linux Nov 8 00:26:28.790126 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:26:28.790134 systemd[1]: Detected virtualization vmware. Nov 8 00:26:28.790141 systemd[1]: Detected architecture x86-64. Nov 8 00:26:28.790147 systemd[1]: Running in initrd. Nov 8 00:26:28.790153 systemd[1]: No hostname configured, using default hostname. Nov 8 00:26:28.790160 systemd[1]: Hostname set to . Nov 8 00:26:28.790167 systemd[1]: Initializing machine ID from random generator. Nov 8 00:26:28.790173 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:26:28.790180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:28.790186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:28.790193 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:26:28.790200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:26:28.790206 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:26:28.790212 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:26:28.790221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:26:28.790228 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:26:28.790234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:28.790241 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:28.790247 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:26:28.790253 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:26:28.790260 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:26:28.790267 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:26:28.790274 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:26:28.790280 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:26:28.790286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:26:28.790293 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 8 00:26:28.790299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:28.790306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:28.790312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:28.790319 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:26:28.790326 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:26:28.790333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:26:28.790339 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:26:28.790346 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:26:28.790352 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:26:28.790358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:26:28.790365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:28.790371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:26:28.790388 systemd-journald[216]: Collecting audit messages is disabled. Nov 8 00:26:28.792373 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:28.792382 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:26:28.792392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:26:28.792407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:26:28.792415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:28.792422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:26:28.792429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:28.792435 kernel: Bridge firewalling registered Nov 8 00:26:28.792444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:26:28.792451 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:28.792457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:26:28.792467 systemd-journald[216]: Journal started Nov 8 00:26:28.792481 systemd-journald[216]: Runtime Journal (/run/log/journal/014e752593104202b7791e02f48b387a) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:26:28.752067 systemd-modules-load[217]: Inserted module 'overlay' Nov 8 00:26:28.780423 systemd-modules-load[217]: Inserted module 'br_netfilter' Nov 8 00:26:28.794408 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:26:28.795273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:28.795496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:28.804654 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:26:28.804958 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:28.806506 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:26:28.811426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:26:28.812527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:26:28.817603 dracut-cmdline[248]: dracut-dracut-053 Nov 8 00:26:28.819611 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:26:28.836525 systemd-resolved[252]: Positive Trust Anchors: Nov 8 00:26:28.836534 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:26:28.836555 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:26:28.838103 systemd-resolved[252]: Defaulting to hostname 'linux'. Nov 8 00:26:28.839834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:26:28.839988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:28.867410 kernel: SCSI subsystem initialized Nov 8 00:26:28.874410 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:26:28.881408 kernel: iscsi: registered transport (tcp) Nov 8 00:26:28.895522 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:26:28.895538 kernel: QLogic iSCSI HBA Driver Nov 8 00:26:28.914616 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:26:28.919665 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:26:28.938357 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:26:28.938380 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:26:28.938390 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:26:28.968442 kernel: raid6: avx2x4 gen() 54959 MB/s Nov 8 00:26:28.985442 kernel: raid6: avx2x2 gen() 54083 MB/s Nov 8 00:26:29.002531 kernel: raid6: avx2x1 gen() 46943 MB/s Nov 8 00:26:29.002550 kernel: raid6: using algorithm avx2x4 gen() 54959 MB/s Nov 8 00:26:29.020535 kernel: raid6: .... xor() 22382 MB/s, rmw enabled Nov 8 00:26:29.020557 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:26:29.033409 kernel: xor: automatically using best checksumming function avx Nov 8 00:26:29.131416 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:26:29.136149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:26:29.141618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:29.148713 systemd-udevd[433]: Using default interface naming scheme 'v255'. Nov 8 00:26:29.151101 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 8 00:26:29.156492 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:26:29.163236 dracut-pre-trigger[435]: rd.md=0: removing MD RAID activation Nov 8 00:26:29.177846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:26:29.182601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:26:29.251862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:29.255495 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:26:29.266509 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:26:29.267259 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:26:29.267876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:29.268090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:26:29.273502 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:26:29.280996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:26:29.316524 kernel: libata version 3.00 loaded. Nov 8 00:26:29.319421 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 8 00:26:29.320409 kernel: scsi host0: ata_piix Nov 8 00:26:29.320570 kernel: scsi host1: ata_piix Nov 8 00:26:29.327507 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 8 00:26:29.327524 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 8 00:26:29.333416 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 8 00:26:29.334684 kernel: vmw_pvscsi: using 64bit dma Nov 8 00:26:29.334700 kernel: vmw_pvscsi: max_id: 16 Nov 8 00:26:29.334708 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 8 00:26:29.337505 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 8 00:26:29.337520 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 8 00:26:29.337531 kernel: vmw_pvscsi: using MSI-X Nov 8 00:26:29.337538 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 8 00:26:29.341416 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 Nov 8 00:26:29.343450 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 8 00:26:29.348411 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 8 00:26:29.350436 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 8 00:26:29.352419 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 8 00:26:29.363757 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:26:29.364507 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:26:29.364745 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:29.365065 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:29.365162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:29.365238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:29.365340 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:29.376586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:29.387153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:26:29.391551 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:26:29.403734 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:29.492497 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 8 00:26:29.498417 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 8 00:26:29.503604 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 8 00:26:29.512838 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:26:29.512858 kernel: AES CTR mode by8 optimization enabled Nov 8 00:26:29.522565 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 8 00:26:29.522678 kernel: sd 2:0:0:0: [sda] Write Protect is off Nov 8 00:26:29.522800 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 8 00:26:29.522864 kernel: sd 2:0:0:0: [sda] Cache data unavailable Nov 8 00:26:29.523989 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through Nov 8 00:26:29.525662 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 8 00:26:29.525773 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:26:29.531494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:29.531515 kernel: sd 2:0:0:0: [sda] Attached SCSI disk Nov 8 00:26:29.534411 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:26:29.560413 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (492) Nov 8 00:26:29.564413 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (489) Nov 8 00:26:29.566137 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 8 00:26:29.568716 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 8 00:26:29.570937 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 8 00:26:29.571431 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 8 00:26:29.576480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:26:29.579228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:26:29.600429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:29.607406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:30.606695 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:26:30.607084 disk-uuid[594]: The operation has completed successfully. Nov 8 00:26:30.645791 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:26:30.646058 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:26:30.649493 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:26:30.651208 sh[612]: Success Nov 8 00:26:30.658411 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:26:30.701051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:26:30.706447 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:26:30.706647 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 8 00:26:30.723927 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:26:30.723948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:30.723956 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:26:30.723963 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:26:30.723970 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:26:30.731404 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:26:30.731932 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:26:30.741487 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 8 00:26:30.742600 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:26:30.756808 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:30.756832 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:30.756841 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:26:30.764584 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:26:30.769408 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:30.770146 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:26:30.773381 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:26:30.778488 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:26:30.811965 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:26:30.819527 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:26:30.876470 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:26:30.883485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:26:30.884771 ignition[671]: Ignition 2.19.0 Nov 8 00:26:30.884777 ignition[671]: Stage: fetch-offline Nov 8 00:26:30.884820 ignition[671]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:30.884827 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:30.884920 ignition[671]: parsed url from cmdline: "" Nov 8 00:26:30.884922 ignition[671]: no config URL provided Nov 8 00:26:30.884926 ignition[671]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:26:30.884931 ignition[671]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:26:30.885414 ignition[671]: config successfully fetched Nov 8 00:26:30.885439 ignition[671]: parsing config with SHA512: 04025625e7cfe85ad58d4ccf1aadc97e86a3e4c6e8285938a01bca915efe77e07392bd128bbb2ec29adfbb1a2ffd2a59fbd86380015b02eca3ec3827c9029e11 Nov 8 00:26:30.889921 unknown[671]: fetched base config from "system" Nov 8 00:26:30.890053 unknown[671]: fetched user config from "vmware" Nov 8 00:26:30.890461 ignition[671]: fetch-offline: fetch-offline passed Nov 8 00:26:30.890619 ignition[671]: Ignition finished successfully Nov 8 00:26:30.891273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 8 00:26:30.897439 systemd-networkd[803]: lo: Link UP Nov 8 00:26:30.897445 systemd-networkd[803]: lo: Gained carrier Nov 8 00:26:30.898217 systemd-networkd[803]: Enumeration completed Nov 8 00:26:30.898358 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:26:30.898499 systemd[1]: Reached target network.target - Network. Nov 8 00:26:30.898565 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 8 00:26:30.898588 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:26:30.902390 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:26:30.902499 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:26:30.901983 systemd-networkd[803]: ens192: Link UP Nov 8 00:26:30.901985 systemd-networkd[803]: ens192: Gained carrier Nov 8 00:26:30.906499 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:26:30.914445 ignition[806]: Ignition 2.19.0 Nov 8 00:26:30.914451 ignition[806]: Stage: kargs Nov 8 00:26:30.914545 ignition[806]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:30.914551 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:30.915037 ignition[806]: kargs: kargs passed Nov 8 00:26:30.915058 ignition[806]: Ignition finished successfully Nov 8 00:26:30.916380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:26:30.920608 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:26:30.927880 ignition[814]: Ignition 2.19.0 Nov 8 00:26:30.927886 ignition[814]: Stage: disks Nov 8 00:26:30.927991 ignition[814]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:30.927997 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:30.928545 ignition[814]: disks: disks passed Nov 8 00:26:30.928593 ignition[814]: Ignition finished successfully Nov 8 00:26:30.929466 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:26:30.929739 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:26:30.929963 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:26:30.930171 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:26:30.930364 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:26:30.930517 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:26:30.934620 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:26:30.944869 systemd-fsck[822]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:26:30.946246 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:26:30.949448 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:26:31.004180 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:26:31.004407 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:26:31.004672 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:26:31.012577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:26:31.014021 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 8 00:26:31.014434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:26:31.014467 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:26:31.014485 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:26:31.019299 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:26:31.022386 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (830) Nov 8 00:26:31.022423 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:31.022434 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:31.022444 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:26:31.021146 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:26:31.027481 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:26:31.029185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:26:31.054835 initrd-setup-root[854]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:26:31.057575 initrd-setup-root[861]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:26:31.059932 initrd-setup-root[868]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:26:31.062246 initrd-setup-root[875]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:26:31.112947 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:26:31.116610 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:26:31.119003 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:26:31.121467 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:31.134274 ignition[943]: INFO : Ignition 2.19.0 Nov 8 00:26:31.134274 ignition[943]: INFO : Stage: mount Nov 8 00:26:31.134274 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:31.134274 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:31.134274 ignition[943]: INFO : mount: mount passed Nov 8 00:26:31.134274 ignition[943]: INFO : Ignition finished successfully Nov 8 00:26:31.135165 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:26:31.140767 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:26:31.141031 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:26:31.719960 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:26:31.728602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:26:31.736415 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (955) Nov 8 00:26:31.739653 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:26:31.739667 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:26:31.739674 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:26:31.744429 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:26:31.744059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:26:31.758821 ignition[972]: INFO : Ignition 2.19.0 Nov 8 00:26:31.758821 ignition[972]: INFO : Stage: files Nov 8 00:26:31.759222 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:31.759222 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:31.759854 ignition[972]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:26:31.760373 ignition[972]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:26:31.760373 ignition[972]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:26:31.762630 ignition[972]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:26:31.762883 ignition[972]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:26:31.763219 unknown[972]: wrote ssh authorized keys file for user: core Nov 8 00:26:31.763466 ignition[972]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:26:31.766083 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:26:31.766319 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:26:31.827509 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:26:31.866416 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:31.867708 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:26:31.983687 systemd-networkd[803]: ens192: Gained IPv6LL Nov 8 00:26:32.286248 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:26:32.483032 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:26:32.483287 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:26:32.483287 ignition[972]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:26:32.483287 ignition[972]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:26:32.483287 ignition[972]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 8 00:26:32.483932 ignition[972]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:26:32.518027 ignition[972]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:26:32.520214 ignition[972]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:26:32.520378 ignition[972]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:26:32.520378 ignition[972]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:26:32.520378 ignition[972]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:26:32.521302 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:26:32.521302 ignition[972]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:26:32.521302 ignition[972]: INFO : files: files passed Nov 8 00:26:32.521302 ignition[972]: INFO : Ignition finished successfully Nov 8 00:26:32.521372 systemd[1]: Finished 
ignition-files.service - Ignition (files). Nov 8 00:26:32.527632 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:26:32.529502 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:26:32.529740 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:26:32.529784 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:26:32.535696 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:32.535696 initrd-setup-root-after-ignition[1002]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:32.536596 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:26:32.537519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:26:32.537811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:26:32.541651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:26:32.553432 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:26:32.553489 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:26:32.553745 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:26:32.553853 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:26:32.554042 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:26:32.554451 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:26:32.563616 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:26:32.567614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:26:32.572743 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:32.572928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:32.573139 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:26:32.573318 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:26:32.573377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:26:32.573744 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:26:32.573887 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:26:32.574059 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:26:32.574238 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:26:32.574448 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:26:32.574645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:26:32.574976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:26:32.575174 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:26:32.575368 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:26:32.575611 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:26:32.575758 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Nov 8 00:26:32.575818 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:26:32.576049 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:32.576195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:32.576370 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:26:32.576423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:32.576585 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:26:32.576641 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:26:32.576870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:26:32.576928 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:26:32.577178 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:26:32.577314 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:26:32.580495 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:32.580721 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:26:32.581011 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:26:32.581219 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:26:32.581283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:26:32.581495 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:26:32.581561 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:26:32.581774 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:26:32.581858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:26:32.582086 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:26:32.582165 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:26:32.593641 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:26:32.596639 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:26:32.596923 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:26:32.597235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:32.597667 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:26:32.597747 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:26:32.600588 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:26:32.600646 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:26:32.604334 ignition[1026]: INFO : Ignition 2.19.0 Nov 8 00:26:32.604334 ignition[1026]: INFO : Stage: umount Nov 8 00:26:32.606957 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:26:32.606957 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:26:32.606957 ignition[1026]: INFO : umount: umount passed Nov 8 00:26:32.606957 ignition[1026]: INFO : Ignition finished successfully Nov 8 00:26:32.605740 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:26:32.605804 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:26:32.606130 systemd[1]: Stopped target network.target - Network. Nov 8 00:26:32.606323 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 8 00:26:32.606355 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:26:32.606489 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:26:32.606514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:26:32.606714 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:26:32.606739 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:26:32.606877 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:26:32.606899 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:26:32.607125 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:26:32.607291 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:26:32.612144 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:26:32.612208 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:26:32.612659 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:26:32.612695 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:32.620615 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:26:32.620730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:26:32.620759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:26:32.620883 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 8 00:26:32.620905 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:26:32.621058 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:32.621929 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:26:32.622236 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:26:32.623314 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:26:32.624797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:26:32.624845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:32.625123 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:26:32.625146 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:32.625252 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:26:32.625273 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:26:32.629476 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:26:32.629536 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:26:32.635776 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:26:32.635878 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:26:32.636420 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:26:32.636475 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:32.636651 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:26:32.636675 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:32.636878 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:26:32.636909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Nov 8 00:26:32.637271 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:26:32.637301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:26:32.637693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:26:32.637724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:26:32.643671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:26:32.644220 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:26:32.644427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:32.644799 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:26:32.644830 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:26:32.645188 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:26:32.645218 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:32.645601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:26:32.645631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:32.647634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:26:32.647903 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:26:32.708940 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:26:32.709168 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:26:32.709459 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:26:32.709594 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:26:32.709627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:26:32.713529 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:26:32.723287 systemd[1]: Switching root. Nov 8 00:26:32.756152 systemd-journald[216]: Journal stopped Nov 8 00:26:33.797479 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Nov 8 00:26:33.797501 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:26:33.797510 kernel: SELinux: policy capability open_perms=1 Nov 8 00:26:33.797515 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:26:33.797520 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:26:33.797525 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:26:33.797533 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:26:33.797538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:26:33.797544 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:26:33.797550 systemd[1]: Successfully loaded SELinux policy in 33.662ms. Nov 8 00:26:33.797556 kernel: audit: type=1403 audit(1762561593.305:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:26:33.797563 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.514ms. 
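The journal handoff above (journald stopped in the initrd, restarted after switch-root) brackets the SELinux policy load and the relabel of /dev, /run, and /sys/fs/cgroup. On the booted host the loaded policy and enforcement mode can be checked with the standard tools, where present:

    # Print the current enforcement mode and loaded policy details:
    getenforce
    sestatus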
Nov 8 00:26:33.797570 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:26:33.797577 systemd[1]: Detected virtualization vmware. Nov 8 00:26:33.797584 systemd[1]: Detected architecture x86-64. Nov 8 00:26:33.797590 systemd[1]: Detected first boot. Nov 8 00:26:33.797597 systemd[1]: Initializing machine ID from random generator. Nov 8 00:26:33.797605 zram_generator::config[1068]: No configuration found. Nov 8 00:26:33.797611 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:26:33.797619 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:26:33.797626 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Nov 8 00:26:33.797632 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:26:33.797638 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:26:33.797645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:26:33.797653 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:26:33.797660 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:26:33.797667 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:26:33.797673 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:26:33.797680 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:26:33.797705 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:26:33.797712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:26:33.797719 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:26:33.797726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:26:33.797748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:26:33.797754 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:26:33.797761 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:26:33.797768 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:26:33.797774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:26:33.797781 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:26:33.797789 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:26:33.797796 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:26:33.797804 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:26:33.797810 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:26:33.797817 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
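The "Ignoring unknown escape sequences" warnings above come from systemd's unit-file parser tripping over the PCRE tokens \K and \d in the coreos-metadata.service command line: systemd interprets backslash escapes in Exec lines before the shell or grep ever sees them, and passes unrecognized sequences through with this warning, so it is harmless. Doubling the backslashes (or moving the pipeline into a standalone script) silences it. A sketch of the fix, assuming the unit wraps the logged command in a shell as shown; the unit's real surrounding text is not in the log:

    [Service]
    # Double each backslash so grep -P still receives \K and \d intact:
    ExecStart=/bin/sh -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \\K[\\d.]+")" > ${OUTPUT}'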
Nov 8 00:26:33.797824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:26:33.797831 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:26:33.797838 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:26:33.797846 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:26:33.797852 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:26:33.797859 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:26:33.797866 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:26:33.797873 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:26:33.797881 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:26:33.797888 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:26:33.797894 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:26:33.797901 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:26:33.797908 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:26:33.797915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:33.798728 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:26:33.798740 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:26:33.798749 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:26:33.798756 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:26:33.798763 systemd[1]: Reached target machines.target - Containers. Nov 8 00:26:33.798770 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:26:33.798777 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Nov 8 00:26:33.798784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:26:33.798790 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:26:33.798797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:26:33.798805 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:26:33.798812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:26:33.798819 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:26:33.798825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:26:33.798832 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:26:33.798839 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:26:33.798845 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:26:33.798852 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:26:33.798859 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:26:33.798867 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:26:33.798874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 8 00:26:33.798881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:26:33.798888 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:26:33.798894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:26:33.798901 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:26:33.798907 systemd[1]: Stopped verity-setup.service. Nov 8 00:26:33.798915 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:33.798922 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:26:33.798929 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:26:33.798936 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:26:33.798943 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:26:33.798949 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:26:33.798956 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:26:33.798963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:26:33.798970 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:26:33.798976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:26:33.798984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:26:33.798991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:26:33.798998 kernel: fuse: init (API version 7.39) Nov 8 00:26:33.799004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:26:33.799011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:26:33.799017 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:26:33.799024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:26:33.799042 systemd-journald[1155]: Collecting audit messages is disabled. Nov 8 00:26:33.799059 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:26:33.799066 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:26:33.799073 systemd-journald[1155]: Journal started Nov 8 00:26:33.799088 systemd-journald[1155]: Runtime Journal (/run/log/journal/250d13346d28458faaf50e5f66d0bded) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:26:33.604206 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:26:33.618441 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:26:33.618707 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:26:33.801077 jq[1135]: true Nov 8 00:26:33.807367 kernel: loop: module loaded Nov 8 00:26:33.807386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:26:33.807652 jq[1167]: true Nov 8 00:26:33.817595 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:26:33.817620 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:26:33.818940 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:26:33.819176 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Nov 8 00:26:33.819262 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:26:33.819492 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:26:33.819563 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:26:33.819794 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:26:33.819965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:26:33.839512 kernel: ACPI: bus type drm_connector registered Nov 8 00:26:33.834548 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:26:33.834678 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:26:33.834700 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:26:33.835314 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:26:33.840264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:26:33.846514 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:26:33.846683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:26:33.849546 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:26:33.851506 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:26:33.851631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:26:33.854576 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:26:33.854708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:26:33.858153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:26:33.859078 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:26:33.860516 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:26:33.860785 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:26:33.863433 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:26:33.882523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:26:33.884956 systemd-journald[1155]: Time spent on flushing to /var/log/journal/250d13346d28458faaf50e5f66d0bded is 25.653ms for 1834 entries. Nov 8 00:26:33.884956 systemd-journald[1155]: System Journal (/var/log/journal/250d13346d28458faaf50e5f66d0bded) is 8.0M, max 584.8M, 576.8M free. Nov 8 00:26:33.916428 systemd-journald[1155]: Received client request to flush runtime journal. Nov 8 00:26:33.916451 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:26:33.894031 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:26:33.894475 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:26:33.900710 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
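The journal sizes reported above (runtime journal capped at 38.6M, system journal at 584.8M) are defaults computed from the size of the backing filesystem; they can be pinned explicitly in journald.conf. An illustrative override, not a file present on this host:

    [Journal]
    # Cap the persistent journal under /var/log/journal:
    SystemMaxUse=512M
    # Cap the volatile journal under /run/log/journal:
    RuntimeMaxUse=32M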
Nov 8 00:26:33.918008 ignition[1174]: Ignition 2.19.0 Nov 8 00:26:33.918202 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:26:33.918439 ignition[1174]: deleting config from guestinfo properties Nov 8 00:26:33.920632 ignition[1174]: Successfully deleted config Nov 8 00:26:33.921719 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Nov 8 00:26:33.921735 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Nov 8 00:26:33.924512 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Nov 8 00:26:33.928091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:26:33.933520 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:26:33.946737 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:26:33.947105 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:26:33.954541 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:26:33.972610 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:26:33.977557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:26:33.978424 kernel: loop1: detected capacity change from 0 to 2976 Nov 8 00:26:33.995114 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 8 00:26:33.995682 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Nov 8 00:26:34.001943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:26:34.020372 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:26:34.023418 kernel: loop2: detected capacity change from 0 to 229808 Nov 8 00:26:34.025480 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:26:34.031738 udevadm[1236]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:26:34.109123 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:26:34.151699 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:26:34.172413 kernel: loop5: detected capacity change from 0 to 2976 Nov 8 00:26:34.181414 kernel: loop6: detected capacity change from 0 to 229808 Nov 8 00:26:34.201538 kernel: loop7: detected capacity change from 0 to 142488 Nov 8 00:26:34.226241 (sd-merge)[1240]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Nov 8 00:26:34.227054 (sd-merge)[1240]: Merged extensions into '/usr'. Nov 8 00:26:34.230448 systemd[1]: Reloading requested from client PID 1209 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:26:34.230457 systemd[1]: Reloading... Nov 8 00:26:34.288414 zram_generator::config[1262]: No configuration found. Nov 8 00:26:34.384548 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:26:34.399778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:34.427306 systemd[1]: Reloading finished in 196 ms. 
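The (sd-merge) lines show systemd-sysext overlaying the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware') onto /usr, followed by the daemon reload that picks up the unit files they ship. On a booted host the merge can be inspected and redone with the systemd-sysext tool:

    # Show which extension images are currently merged:
    systemd-sysext status
    # Re-scan /etc/extensions and /var/lib/extensions after adding or removing a .raw image:
    systemd-sysext refresh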
Nov 8 00:26:34.449412 ldconfig[1205]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:26:34.453422 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:26:34.453715 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:26:34.453949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:26:34.460666 systemd[1]: Starting ensure-sysext.service... Nov 8 00:26:34.461515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:26:34.462495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:26:34.470454 systemd[1]: Reloading requested from client PID 1323 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:26:34.470462 systemd[1]: Reloading... Nov 8 00:26:34.482148 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:26:34.482550 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:26:34.483087 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:26:34.483290 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Nov 8 00:26:34.483362 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Nov 8 00:26:34.488573 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Nov 8 00:26:34.492201 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:26:34.492206 systemd-tmpfiles[1324]: Skipping /boot Nov 8 00:26:34.499436 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:26:34.499442 systemd-tmpfiles[1324]: Skipping /boot Nov 8 00:26:34.518417 zram_generator::config[1352]: No configuration found. Nov 8 00:26:34.595670 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:26:34.614507 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:26:34.612847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:26:34.623407 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:26:34.631407 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1381) Nov 8 00:26:34.652444 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:26:34.652716 systemd[1]: Reloading finished in 182 ms. Nov 8 00:26:34.659273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:26:34.659653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:26:34.685560 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:26:34.688734 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:26:34.690933 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:26:34.694184 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
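The "Duplicate line for path" notices come from systemd-tmpfiles merging every fragment under the tmpfiles.d directories: when two fragments declare the same path, the entry read first wins and later ones are skipped with this warning, so the messages are benign. Fragments use a fixed column layout (Type Path Mode User Group Age Argument); an illustrative line of the kind being merged here, not taken from this host:

    # /usr/lib/tmpfiles.d/example.conf
    d /var/log/journal 2755 root systemd-journal - -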
Nov 8 00:26:34.697251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:26:34.704585 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:26:34.705863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:26:34.708078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:34.710892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:26:34.716439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:26:34.719376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:26:34.719532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:26:34.727543 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:26:34.729875 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:26:34.730038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:34.734032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:26:34.734118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:26:34.734476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:26:34.737455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:34.739671 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Nov 8 00:26:34.743654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:26:34.743806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:26:34.743903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:34.746337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:34.747969 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:26:34.749194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:26:34.749277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:26:34.749684 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:26:34.750004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:26:34.750082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:26:34.750384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:26:34.750802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 8 00:26:34.753883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:26:34.757281 systemd[1]: Finished ensure-sysext.service. Nov 8 00:26:34.766339 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:26:34.766901 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:26:34.776042 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:26:34.776357 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:26:34.776502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:26:34.776765 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:26:34.776883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:26:34.777790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:26:34.781512 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:26:34.784539 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Nov 8 00:26:34.788497 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:26:34.793166 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:26:34.793195 kernel: Guest personality initialized and is active Nov 8 00:26:34.793208 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 8 00:26:34.793218 kernel: Initialized host personality Nov 8 00:26:34.809726 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:26:34.814537 augenrules[1487]: No rules Nov 8 00:26:34.815891 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:26:34.855440 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:26:34.857113 (udev-worker)[1382]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 8 00:26:34.870555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:26:34.878081 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:26:34.878130 systemd-resolved[1442]: Positive Trust Anchors: Nov 8 00:26:34.878135 systemd-resolved[1442]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:26:34.878157 systemd-resolved[1442]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:26:34.878259 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:26:34.879801 systemd-networkd[1441]: lo: Link UP Nov 8 00:26:34.880678 systemd-networkd[1441]: lo: Gained carrier Nov 8 00:26:34.880791 systemd-resolved[1442]: Defaulting to hostname 'linux'. Nov 8 00:26:34.881549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Nov 8 00:26:34.881674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:26:34.882249 systemd-networkd[1441]: Enumeration completed Nov 8 00:26:34.882327 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:26:34.882487 systemd[1]: Reached target network.target - Network. Nov 8 00:26:34.883713 systemd-timesyncd[1466]: No network connectivity, watching for changes. Nov 8 00:26:34.888010 systemd-networkd[1441]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 8 00:26:34.890416 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:26:34.890539 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:26:34.890183 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:26:34.890470 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:26:34.891648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:26:34.893874 systemd-networkd[1441]: ens192: Link UP Nov 8 00:26:34.894019 systemd-networkd[1441]: ens192: Gained carrier Nov 8 00:26:34.894369 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:26:34.896899 systemd-timesyncd[1466]: Network configuration changed, trying to establish connection. Nov 8 00:26:34.897187 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:26:34.911407 lvm[1502]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:26:34.934353 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:26:34.934577 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:26:34.940601 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:26:34.941169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:26:34.941816 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:26:34.942167 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:26:34.942375 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:26:34.942602 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:26:34.942850 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:26:34.943046 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:26:34.943162 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:26:34.943176 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:26:34.943358 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:26:34.943706 lvm[1506]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:26:34.944352 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:26:34.945359 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:26:34.947686 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
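ens192 is configured from the 00-vmware.network file that Ignition wrote earlier in the boot, after which the link gains carrier and timesyncd retries its connection. The log does not show the file's contents; a typical match-and-DHCP unit for this interface would look like the following (assumed contents, not read from the host):

    [Match]
    Name=ens192

    [Network]
    DHCP=yes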
Nov 8 00:26:34.948103 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:26:34.948250 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:26:34.948346 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:26:34.948469 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:26:34.948482 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:26:34.950512 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:26:34.953567 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:26:34.957477 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:26:34.959479 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:26:34.960483 jq[1512]: false Nov 8 00:26:34.960494 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:26:34.962324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:26:34.964642 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:26:34.968385 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:26:34.971508 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:26:34.975333 extend-filesystems[1513]: Found loop4 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found loop5 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found loop6 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found loop7 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda1 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda2 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda3 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found usr Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda4 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda6 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda7 Nov 8 00:26:34.975333 extend-filesystems[1513]: Found sda9 Nov 8 00:26:34.975333 extend-filesystems[1513]: Checking size of /dev/sda9 Nov 8 00:26:34.975166 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:26:34.975488 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:26:34.975856 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:26:34.976176 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:26:34.987677 extend-filesystems[1513]: Old size kept for /dev/sda9 Nov 8 00:26:34.987677 extend-filesystems[1513]: Found sr0 Nov 8 00:26:34.988495 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:26:34.990808 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Nov 8 00:26:34.991379 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:26:34.992598 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 8 00:26:34.992690 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:26:34.992877 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:26:34.992971 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:26:34.995564 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:26:34.995658 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:26:35.005683 jq[1523]: true Nov 8 00:26:35.008666 update_engine[1522]: I20251108 00:26:35.008625 1522 main.cc:92] Flatcar Update Engine starting Nov 8 00:26:35.013614 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:26:35.013810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:26:35.021937 jq[1545]: true Nov 8 00:26:35.025649 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:26:35.034214 dbus-daemon[1511]: [system] SELinux support is enabled Nov 8 00:26:35.034304 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:26:35.035643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:26:35.035662 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:26:35.036487 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:26:35.036496 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:26:35.037029 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:26:35.037320 update_engine[1522]: I20251108 00:26:35.037235 1522 update_check_scheduler.cc:74] Next update check in 3m33s Nov 8 00:26:35.042544 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:26:35.043631 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Nov 8 00:26:35.045105 tar[1538]: linux-amd64/LICENSE Nov 8 00:26:35.045105 tar[1538]: linux-amd64/helm Nov 8 00:26:35.050470 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Nov 8 00:26:35.057414 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1393) Nov 8 00:26:35.089345 unknown[1558]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 8 00:26:35.106950 kernel: NET: Registered PF_VSOCK protocol family Nov 8 00:26:35.091611 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 8 00:26:35.092136 unknown[1558]: Core dump limit set to -1 Nov 8 00:26:35.116204 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:26:35.118082 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:26:35.121692 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:26:35.138026 systemd-logind[1519]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:26:35.138041 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:26:35.139168 systemd-logind[1519]: New seat seat0. 
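update_engine starts and schedules its first check ("Next update check in 3m33s"), and locksmithd, the cluster reboot manager, comes up to coordinate reboots once an update is applied; its strategy (logged just below as strategy="reboot") means the node reboots itself rather than waiting on a lock. On Flatcar this is normally set in /etc/flatcar/update.conf; an illustrative setting, not this host's file:

    # /etc/flatcar/update.conf
    REBOOT_STRATEGY=etcd-lock    # alternatives include: reboot, off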
Nov 8 00:26:35.142214 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:26:35.231568 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:26:35.245009 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:26:35.270585 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:26:35.278961 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:26:35.286231 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:26:35.286383 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:26:35.293575 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:26:35.306433 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:26:35.313387 containerd[1547]: time="2025-11-08T00:26:35.310544998Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:26:35.313608 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:26:35.316102 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:26:35.316289 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:26:35.331552 containerd[1547]: time="2025-11-08T00:26:35.331529906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:35.333307 containerd[1547]: time="2025-11-08T00:26:35.333289911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:35.333353 containerd[1547]: time="2025-11-08T00:26:35.333346205Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:26:35.333389 containerd[1547]: time="2025-11-08T00:26:35.333381425Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:26:35.333562 containerd[1547]: time="2025-11-08T00:26:35.333552362Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:26:35.334128 containerd[1547]: time="2025-11-08T00:26:35.334117408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334203 containerd[1547]: time="2025-11-08T00:26:35.334193079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334239 containerd[1547]: time="2025-11-08T00:26:35.334231837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334355 containerd[1547]: time="2025-11-08T00:26:35.334345266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334387 containerd[1547]: time="2025-11-08T00:26:35.334380803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334446 containerd[1547]: time="2025-11-08T00:26:35.334437844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334480 containerd[1547]: time="2025-11-08T00:26:35.334473287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334551 containerd[1547]: time="2025-11-08T00:26:35.334542992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:35.334721 containerd[1547]: time="2025-11-08T00:26:35.334711917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:26:35.335484 containerd[1547]: time="2025-11-08T00:26:35.335471745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:26:35.335522 containerd[1547]: time="2025-11-08T00:26:35.335514865Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:26:35.335604 containerd[1547]: time="2025-11-08T00:26:35.335595037Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:26:35.335661 containerd[1547]: time="2025-11-08T00:26:35.335652934Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:26:35.339953 containerd[1547]: time="2025-11-08T00:26:35.339941822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:26:35.340033 containerd[1547]: time="2025-11-08T00:26:35.340023392Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:26:35.340073 containerd[1547]: time="2025-11-08T00:26:35.340065993Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:26:35.340112 containerd[1547]: time="2025-11-08T00:26:35.340103978Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:26:35.340144 containerd[1547]: time="2025-11-08T00:26:35.340138039Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:26:35.340268 containerd[1547]: time="2025-11-08T00:26:35.340260016Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:26:35.340952 containerd[1547]: time="2025-11-08T00:26:35.340941898Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:26:35.341068 containerd[1547]: time="2025-11-08T00:26:35.341059421Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:26:35.341262 containerd[1547]: time="2025-11-08T00:26:35.341252317Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:26:35.341307 containerd[1547]: time="2025-11-08T00:26:35.341299637Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Nov 8 00:26:35.341340 containerd[1547]: time="2025-11-08T00:26:35.341333354Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341378 containerd[1547]: time="2025-11-08T00:26:35.341370505Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341483 containerd[1547]: time="2025-11-08T00:26:35.341474031Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341520 containerd[1547]: time="2025-11-08T00:26:35.341513045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341581 containerd[1547]: time="2025-11-08T00:26:35.341573117Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341617 containerd[1547]: time="2025-11-08T00:26:35.341610193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341674 containerd[1547]: time="2025-11-08T00:26:35.341666254Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341705 containerd[1547]: time="2025-11-08T00:26:35.341698753Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:26:35.341771 containerd[1547]: time="2025-11-08T00:26:35.341762801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.341808 containerd[1547]: time="2025-11-08T00:26:35.341798063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.341859 containerd[1547]: time="2025-11-08T00:26:35.341851978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341928621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341939286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341947151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341953681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341960456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341967513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341981800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341989318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.341996019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.342008554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.342024087Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.342037141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.342043800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342678 containerd[1547]: time="2025-11-08T00:26:35.342049466Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342073458Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342083483Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342090054Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342096713Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342105282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342114448Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342122599Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:26:35.343382 containerd[1547]: time="2025-11-08T00:26:35.342128198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:26:35.342996 systemd[1]: Started containerd.service - containerd container runtime. 
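The plugins skipped during the load above are expected on a stock Flatcar node: the devmapper and zfs snapshotters need dedicated backing storage, the otlp tracing processor needs an endpoint, and NRI is off by configuration, so containerd settles on overlayfs. Which snapshotters actually registered can be read back from containerd itself (a diagnostic sketch; ctr ships alongside containerd, and the output shown is illustrative):

    ctr plugins ls | grep snapshotter
    # io.containerd.snapshotter.v1   overlayfs   linux/amd64   ok
    # io.containerd.snapshotter.v1   devmapper   linux/amd64   skip
    # io.containerd.snapshotter.v1   zfs         linux/amd64   skip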
Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342283237Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342317328Z" level=info msg="Connect containerd service" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342339168Z" level=info msg="using legacy CRI server" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342343517Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342411530Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342717842Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:26:35.343547 containerd[1547]: 
time="2025-11-08T00:26:35.342820496Z" level=info msg="Start subscribing containerd event" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342851475Z" level=info msg="Start recovering state" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342853533Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342887201Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342888290Z" level=info msg="Start event monitor" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342906863Z" level=info msg="Start snapshots syncer" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342912028Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342916480Z" level=info msg="Start streaming server" Nov 8 00:26:35.343547 containerd[1547]: time="2025-11-08T00:26:35.342945620Z" level=info msg="containerd successfully booted in 0.034097s" Nov 8 00:28:14.065290 systemd-timesyncd[1466]: Contacted time server 45.56.66.53:123 (0.flatcar.pool.ntp.org). Nov 8 00:28:14.065324 systemd-timesyncd[1466]: Initial clock synchronization to Sat 2025-11-08 00:28:14.065170 UTC. Nov 8 00:28:14.065534 systemd-resolved[1442]: Clock change detected. Flushing caches. Nov 8 00:28:14.216429 tar[1538]: linux-amd64/README.md Nov 8 00:28:14.222833 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:28:15.227855 systemd-networkd[1441]: ens192: Gained IPv6LL Nov 8 00:28:15.229514 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:28:15.230032 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:28:15.234771 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 8 00:28:15.238758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:15.240977 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:28:15.258125 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:28:15.269697 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:28:15.269826 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 8 00:28:15.270404 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:28:16.161425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:16.161952 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:28:16.162491 systemd[1]: Startup finished in 954ms (kernel) + 4.680s (initrd) + 4.189s (userspace) = 9.823s. Nov 8 00:28:16.167835 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:16.265306 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:28:16.267382 login[1623]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:28:16.273720 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:28:16.277727 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:28:16.278816 systemd-logind[1519]: New session 2 of user core. 
Nov 8 00:28:16.281644 systemd-logind[1519]: New session 1 of user core. Nov 8 00:28:16.285978 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:28:16.290913 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:28:16.293018 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:28:16.347059 systemd[1697]: Queued start job for default target default.target. Nov 8 00:28:16.357327 systemd[1697]: Created slice app.slice - User Application Slice. Nov 8 00:28:16.357550 systemd[1697]: Reached target paths.target - Paths. Nov 8 00:28:16.357594 systemd[1697]: Reached target timers.target - Timers. Nov 8 00:28:16.358304 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:28:16.365361 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:28:16.365392 systemd[1697]: Reached target sockets.target - Sockets. Nov 8 00:28:16.365417 systemd[1697]: Reached target basic.target - Basic System. Nov 8 00:28:16.365440 systemd[1697]: Reached target default.target - Main User Target. Nov 8 00:28:16.365458 systemd[1697]: Startup finished in 69ms. Nov 8 00:28:16.365502 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:28:16.367674 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:28:16.368274 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:28:16.762175 kubelet[1690]: E1108 00:28:16.762110 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:16.763337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:16.763441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:26.922421 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:28:26.936810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:27.238433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:27.241738 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:27.311304 kubelet[1741]: E1108 00:28:27.311266 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:27.313815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:27.313907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:37.422668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:28:37.428896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:37.691153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
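The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init/join, so this crash loop is the expected state of a node that has booted but not yet been initialized. For reference, a minimal hand-written KubeletConfiguration carrying the two settings this log later depends on (a sketch only; the real file is generated by kubeadm):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # must match containerd's SystemdCgroup=true
    staticPodPath: /etc/kubernetes/manifests   # where the control-plane pod manifests land
    EOF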
Nov 8 00:28:37.695943 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:37.781116 kubelet[1756]: E1108 00:28:37.781076 1756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:37.782630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:37.782722 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:43.875563 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:28:43.877748 systemd[1]: Started sshd@0-139.178.70.106:22-147.75.109.163:41386.service - OpenSSH per-connection server daemon (147.75.109.163:41386). Nov 8 00:28:43.910352 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 41386 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:43.911047 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.913375 systemd-logind[1519]: New session 3 of user core. Nov 8 00:28:43.920687 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:28:43.980740 systemd[1]: Started sshd@1-139.178.70.106:22-147.75.109.163:41396.service - OpenSSH per-connection server daemon (147.75.109.163:41396). Nov 8 00:28:44.006999 sshd[1769]: Accepted publickey for core from 147.75.109.163 port 41396 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:44.007771 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:44.011631 systemd-logind[1519]: New session 4 of user core. Nov 8 00:28:44.014681 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:28:44.064201 sshd[1769]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:44.069579 systemd[1]: sshd@1-139.178.70.106:22-147.75.109.163:41396.service: Deactivated successfully. Nov 8 00:28:44.070393 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:28:44.071209 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:28:44.074859 systemd[1]: Started sshd@2-139.178.70.106:22-147.75.109.163:41410.service - OpenSSH per-connection server daemon (147.75.109.163:41410). Nov 8 00:28:44.076082 systemd-logind[1519]: Removed session 4. Nov 8 00:28:44.102393 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 41410 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:44.103286 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:44.106055 systemd-logind[1519]: New session 5 of user core. Nov 8 00:28:44.113681 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:28:44.158945 sshd[1776]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:44.171863 systemd[1]: sshd@2-139.178.70.106:22-147.75.109.163:41410.service: Deactivated successfully. Nov 8 00:28:44.172574 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:28:44.173291 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:28:44.173948 systemd[1]: Started sshd@3-139.178.70.106:22-147.75.109.163:41418.service - OpenSSH per-connection server daemon (147.75.109.163:41418). 
Nov 8 00:28:44.175739 systemd-logind[1519]: Removed session 5. Nov 8 00:28:44.203105 sshd[1783]: Accepted publickey for core from 147.75.109.163 port 41418 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:44.203752 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:44.205928 systemd-logind[1519]: New session 6 of user core. Nov 8 00:28:44.212677 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:28:44.260872 sshd[1783]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:44.271054 systemd[1]: sshd@3-139.178.70.106:22-147.75.109.163:41418.service: Deactivated successfully. Nov 8 00:28:44.271866 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:28:44.272662 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:28:44.273342 systemd[1]: Started sshd@4-139.178.70.106:22-147.75.109.163:41426.service - OpenSSH per-connection server daemon (147.75.109.163:41426). Nov 8 00:28:44.274880 systemd-logind[1519]: Removed session 6. Nov 8 00:28:44.309278 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 41426 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:44.310071 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:44.312635 systemd-logind[1519]: New session 7 of user core. Nov 8 00:28:44.322711 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:28:44.376851 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:28:44.377014 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:44.386999 sudo[1793]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:44.387985 sshd[1790]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:44.403983 systemd[1]: sshd@4-139.178.70.106:22-147.75.109.163:41426.service: Deactivated successfully. Nov 8 00:28:44.404711 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:28:44.405432 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:28:44.406127 systemd[1]: Started sshd@5-139.178.70.106:22-147.75.109.163:41438.service - OpenSSH per-connection server daemon (147.75.109.163:41438). Nov 8 00:28:44.407778 systemd-logind[1519]: Removed session 7. Nov 8 00:28:44.435626 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 41438 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:44.436314 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:44.439264 systemd-logind[1519]: New session 8 of user core. Nov 8 00:28:44.445686 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:28:44.493101 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:28:44.493445 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:44.495247 sudo[1802]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:44.498115 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:28:44.498269 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:44.506768 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Nov 8 00:28:44.507810 auditctl[1805]: No rules Nov 8 00:28:44.508058 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:28:44.508162 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:44.509667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:44.525242 augenrules[1823]: No rules Nov 8 00:28:44.525954 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:44.526682 sudo[1801]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:44.527697 sshd[1798]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:44.536099 systemd[1]: sshd@5-139.178.70.106:22-147.75.109.163:41438.service: Deactivated successfully. Nov 8 00:28:44.537331 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:28:44.538265 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:28:44.549989 systemd[1]: Started sshd@6-139.178.70.106:22-147.75.109.163:41446.service - OpenSSH per-connection server daemon (147.75.109.163:41446). Nov 8 00:28:44.551039 systemd-logind[1519]: Removed session 8. Nov 8 00:28:44.577839 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 41446 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:28:44.578663 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:44.581014 systemd-logind[1519]: New session 9 of user core. Nov 8 00:28:44.590791 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:28:44.639305 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:28:44.639694 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:44.923776 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:28:44.923845 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:28:45.185122 dockerd[1850]: time="2025-11-08T00:28:45.185042916Z" level=info msg="Starting up" Nov 8 00:28:45.243437 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2176168208-merged.mount: Deactivated successfully. Nov 8 00:28:45.261426 dockerd[1850]: time="2025-11-08T00:28:45.261375430Z" level=info msg="Loading containers: start." Nov 8 00:28:45.329747 kernel: Initializing XFRM netlink socket Nov 8 00:28:45.391147 systemd-networkd[1441]: docker0: Link UP Nov 8 00:28:45.401384 dockerd[1850]: time="2025-11-08T00:28:45.401356825Z" level=info msg="Loading containers: done." Nov 8 00:28:45.413358 dockerd[1850]: time="2025-11-08T00:28:45.413329591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:28:45.413470 dockerd[1850]: time="2025-11-08T00:28:45.413404123Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:28:45.413491 dockerd[1850]: time="2025-11-08T00:28:45.413472831Z" level=info msg="Daemon has completed initialization" Nov 8 00:28:45.428752 dockerd[1850]: time="2025-11-08T00:28:45.428508215Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:28:45.428996 systemd[1]: Started docker.service - Docker Application Container Engine. 
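Docker's overlay2 warning above means the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR; overlayfs redirect entries are invisible to the native diff driver, so dockerd falls back to a slower userspace diff. Image builds may be slower, but running containers are unaffected. Two quick checks (a sketch; /proc/config.gz is only present when the kernel was built with IKCONFIG_PROC):

    docker info --format '{{.Driver}}'                   # expect: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR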
Nov 8 00:28:46.380851 containerd[1547]: time="2025-11-08T00:28:46.380714975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:28:47.043505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877161404.mount: Deactivated successfully. Nov 8 00:28:47.922730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:28:47.928815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:48.018586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:48.022806 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:48.073777 kubelet[2056]: E1108 00:28:48.073751 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:48.076031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:48.076123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:48.376667 containerd[1547]: time="2025-11-08T00:28:48.376087914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:48.376897 containerd[1547]: time="2025-11-08T00:28:48.376762538Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 8 00:28:48.377179 containerd[1547]: time="2025-11-08T00:28:48.377162361Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:48.378987 containerd[1547]: time="2025-11-08T00:28:48.378968227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:48.379817 containerd[1547]: time="2025-11-08T00:28:48.379628380Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.998890687s" Nov 8 00:28:48.379817 containerd[1547]: time="2025-11-08T00:28:48.379667624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:28:48.380448 containerd[1547]: time="2025-11-08T00:28:48.380416474Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:28:50.518629 containerd[1547]: time="2025-11-08T00:28:50.518231697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:50.526258 containerd[1547]: time="2025-11-08T00:28:50.526028359Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 8 
00:28:50.535967 containerd[1547]: time="2025-11-08T00:28:50.535928424Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:50.546400 containerd[1547]: time="2025-11-08T00:28:50.546353087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:50.547329 containerd[1547]: time="2025-11-08T00:28:50.547242852Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.166740304s" Nov 8 00:28:50.547329 containerd[1547]: time="2025-11-08T00:28:50.547266529Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:28:50.547717 containerd[1547]: time="2025-11-08T00:28:50.547697536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:28:51.742627 containerd[1547]: time="2025-11-08T00:28:51.742256459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.743404 containerd[1547]: time="2025-11-08T00:28:51.743219703Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 8 00:28:51.743720 containerd[1547]: time="2025-11-08T00:28:51.743704471Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.745768 containerd[1547]: time="2025-11-08T00:28:51.745743035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.747068 containerd[1547]: time="2025-11-08T00:28:51.747048099Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.199324999s" Nov 8 00:28:51.747119 containerd[1547]: time="2025-11-08T00:28:51.747072152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:28:51.747497 containerd[1547]: time="2025-11-08T00:28:51.747476684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:28:52.684282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394050859.mount: Deactivated successfully. 
Nov 8 00:28:53.088679 containerd[1547]: time="2025-11-08T00:28:53.088422427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:53.102526 containerd[1547]: time="2025-11-08T00:28:53.102474238Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 8 00:28:53.120817 containerd[1547]: time="2025-11-08T00:28:53.120771118Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:53.130922 containerd[1547]: time="2025-11-08T00:28:53.130881082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:53.131736 containerd[1547]: time="2025-11-08T00:28:53.131536142Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.384025129s" Nov 8 00:28:53.131736 containerd[1547]: time="2025-11-08T00:28:53.131563809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:28:53.132030 containerd[1547]: time="2025-11-08T00:28:53.132011531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:28:53.873331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025591787.mount: Deactivated successfully. 
Nov 8 00:28:55.719630 containerd[1547]: time="2025-11-08T00:28:55.719105364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:55.721585 containerd[1547]: time="2025-11-08T00:28:55.721426138Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 8 00:28:55.723685 containerd[1547]: time="2025-11-08T00:28:55.723648799Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:55.725743 containerd[1547]: time="2025-11-08T00:28:55.725719909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:55.727385 containerd[1547]: time="2025-11-08T00:28:55.727357400Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.595320179s" Nov 8 00:28:55.727437 containerd[1547]: time="2025-11-08T00:28:55.727396749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:28:55.727770 containerd[1547]: time="2025-11-08T00:28:55.727750671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:28:56.622320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682687403.mount: Deactivated successfully. 
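The pulls logged here (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause) are the standard control-plane image set, fetched ahead of kubeadm init so the init step itself stays fast. The equivalent manual pre-pull, pinned to the versions seen in this log (sketch):

    kubeadm config images list --kubernetes-version v1.33.5
    kubeadm config images pull --kubernetes-version v1.33.5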
Nov 8 00:28:56.626350 containerd[1547]: time="2025-11-08T00:28:56.626317824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:56.627667 containerd[1547]: time="2025-11-08T00:28:56.627636234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:28:56.627976 containerd[1547]: time="2025-11-08T00:28:56.627948232Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:56.629114 containerd[1547]: time="2025-11-08T00:28:56.629100447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:56.629960 containerd[1547]: time="2025-11-08T00:28:56.629594633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 901.821357ms" Nov 8 00:28:56.629960 containerd[1547]: time="2025-11-08T00:28:56.629626406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:28:56.630159 containerd[1547]: time="2025-11-08T00:28:56.630147671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:28:57.270242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108712487.mount: Deactivated successfully. Nov 8 00:28:58.172509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:28:58.187087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:59.354102 update_engine[1522]: I20251108 00:28:59.353654 1522 update_attempter.cc:509] Updating boot flags... Nov 8 00:28:59.521700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:59.529860 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:59.566627 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2202) Nov 8 00:28:59.636294 kubelet[2196]: E1108 00:28:59.635879 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:59.639315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:59.639412 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
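The kubelet keeps failing and being relaunched on a steady ~10-second cadence, consistent with the Restart=always / RestartSec=10 policy commonly shipped in kubeadm's kubelet drop-in: systemd simply restarts it until the config file appears. The policy and counter can be read back from systemd (sketch; the values shown are what this log implies):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    # Restart=always
    # RestartUSec=10s
    # NRestarts=4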
Nov 8 00:29:00.968672 containerd[1547]: time="2025-11-08T00:29:00.968642500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:00.969404 containerd[1547]: time="2025-11-08T00:29:00.969086472Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 8 00:29:00.969853 containerd[1547]: time="2025-11-08T00:29:00.969831664Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:00.972132 containerd[1547]: time="2025-11-08T00:29:00.971805226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:00.972629 containerd[1547]: time="2025-11-08T00:29:00.972600986Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.342407631s" Nov 8 00:29:00.972666 containerd[1547]: time="2025-11-08T00:29:00.972631977Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:29:03.729014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:03.737741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:03.756684 systemd[1]: Reloading requested from client PID 2249 ('systemctl') (unit session-9.scope)... Nov 8 00:29:03.756765 systemd[1]: Reloading... Nov 8 00:29:03.814721 zram_generator::config[2286]: No configuration found. Nov 8 00:29:03.872057 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:29:03.887143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:03.931592 systemd[1]: Reloading finished in 174 ms. Nov 8 00:29:03.975940 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:29:03.976002 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:29:03.976168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:03.981814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:04.540783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:04.544097 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:29:04.567193 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:29:04.567193 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 8 00:29:04.567193 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:29:04.584715 kubelet[2354]: I1108 00:29:04.584681 2354 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:29:05.314168 kubelet[2354]: I1108 00:29:05.313258 2354 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:29:05.314168 kubelet[2354]: I1108 00:29:05.313291 2354 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:29:05.314168 kubelet[2354]: I1108 00:29:05.313651 2354 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:29:05.578993 kubelet[2354]: I1108 00:29:05.578829 2354 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:29:05.602352 kubelet[2354]: E1108 00:29:05.602313 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:29:05.764549 kubelet[2354]: E1108 00:29:05.764492 2354 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:29:05.764549 kubelet[2354]: I1108 00:29:05.764530 2354 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:29:05.791037 kubelet[2354]: I1108 00:29:05.791008 2354 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:29:05.796447 kubelet[2354]: I1108 00:29:05.796416 2354 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:29:05.799530 kubelet[2354]: I1108 00:29:05.796448 2354 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:29:05.799696 kubelet[2354]: I1108 00:29:05.799539 2354 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:29:05.799696 kubelet[2354]: I1108 00:29:05.799553 2354 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:29:05.800526 kubelet[2354]: I1108 00:29:05.800508 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:05.803543 kubelet[2354]: I1108 00:29:05.803260 2354 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:29:05.803543 kubelet[2354]: I1108 00:29:05.803289 2354 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:29:05.804354 kubelet[2354]: I1108 00:29:05.803903 2354 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:29:05.806013 kubelet[2354]: I1108 00:29:05.805657 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:29:05.807556 kubelet[2354]: E1108 00:29:05.807535 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:29:05.817942 kubelet[2354]: E1108 00:29:05.817912 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 
00:29:05.818371 kubelet[2354]: I1108 00:29:05.818353 2354 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:29:05.818837 kubelet[2354]: I1108 00:29:05.818768 2354 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:29:05.820112 kubelet[2354]: W1108 00:29:05.820094 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:29:05.824322 kubelet[2354]: I1108 00:29:05.824305 2354 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:29:05.824370 kubelet[2354]: I1108 00:29:05.824346 2354 server.go:1289] "Started kubelet" Nov 8 00:29:05.824528 kubelet[2354]: I1108 00:29:05.824503 2354 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:29:05.826491 kubelet[2354]: I1108 00:29:05.826477 2354 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:29:05.828414 kubelet[2354]: I1108 00:29:05.828378 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:29:05.828648 kubelet[2354]: I1108 00:29:05.828636 2354 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:29:05.832548 kubelet[2354]: E1108 00:29:05.828714 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e091c28d152a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:29:05.824322858 +0000 UTC m=+1.278173505,LastTimestamp:2025-11-08 00:29:05.824322858 +0000 UTC m=+1.278173505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:29:05.832548 kubelet[2354]: I1108 00:29:05.832427 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:29:05.833982 kubelet[2354]: I1108 00:29:05.833928 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:29:05.835817 kubelet[2354]: E1108 00:29:05.835635 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:29:05.835817 kubelet[2354]: I1108 00:29:05.835661 2354 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:29:05.835817 kubelet[2354]: I1108 00:29:05.835796 2354 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:29:05.835897 kubelet[2354]: I1108 00:29:05.835842 2354 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:29:05.836092 kubelet[2354]: E1108 00:29:05.836074 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:29:05.839376 kubelet[2354]: E1108 00:29:05.839050 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" Nov 8 00:29:05.839376 kubelet[2354]: I1108 00:29:05.839267 2354 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:29:05.841499 kubelet[2354]: I1108 00:29:05.840992 2354 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:29:05.841499 kubelet[2354]: I1108 00:29:05.841012 2354 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:29:05.847882 kubelet[2354]: I1108 00:29:05.847860 2354 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:29:05.848534 kubelet[2354]: I1108 00:29:05.848526 2354 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:29:05.848576 kubelet[2354]: I1108 00:29:05.848571 2354 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:29:05.848676 kubelet[2354]: I1108 00:29:05.848667 2354 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:29:05.848723 kubelet[2354]: I1108 00:29:05.848717 2354 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:29:05.848780 kubelet[2354]: E1108 00:29:05.848768 2354 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:29:05.852826 kubelet[2354]: E1108 00:29:05.852814 2354 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:29:05.852993 kubelet[2354]: E1108 00:29:05.852973 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:29:05.866073 kubelet[2354]: I1108 00:29:05.866024 2354 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:29:05.866073 kubelet[2354]: I1108 00:29:05.866035 2354 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:29:05.866073 kubelet[2354]: I1108 00:29:05.866045 2354 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:05.871622 kubelet[2354]: I1108 00:29:05.871566 2354 policy_none.go:49] "None policy: Start" Nov 8 00:29:05.871622 kubelet[2354]: I1108 00:29:05.871581 2354 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:29:05.871622 kubelet[2354]: I1108 00:29:05.871592 2354 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:29:05.925486 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:29:05.933857 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 8 00:29:05.936438 kubelet[2354]: E1108 00:29:05.936362 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:29:05.936948 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:29:05.944567 kubelet[2354]: E1108 00:29:05.944399 2354 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:29:05.944731 kubelet[2354]: I1108 00:29:05.944715 2354 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:29:05.944770 kubelet[2354]: I1108 00:29:05.944728 2354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:29:05.946009 kubelet[2354]: I1108 00:29:05.944980 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:29:05.946009 kubelet[2354]: E1108 00:29:05.945674 2354 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:29:05.946009 kubelet[2354]: E1108 00:29:05.945698 2354 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:29:05.965772 systemd[1]: Created slice kubepods-burstable-podd782e18d8f4d22232283e7c4c7bd441c.slice - libcontainer container kubepods-burstable-podd782e18d8f4d22232283e7c4c7bd441c.slice. Nov 8 00:29:05.976434 kubelet[2354]: E1108 00:29:05.976256 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:05.980230 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 8 00:29:05.982061 kubelet[2354]: E1108 00:29:05.981950 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:05.990370 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
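The three pod-scoped slices created above belong to the static pods the kubelet found under its staticPodPath (kube-apiserver, kube-controller-manager, and kube-scheduler, matching the volumes mounted below); the "No need to create a mirror pod" errors are expected while the API server those manifests describe is still coming up, since mirror pods can only be posted once the node object exists. On a kubeadm control plane the manifest directory would typically look like this (illustrative listing; a full layout usually adds etcd.yaml):

    ls /etc/kubernetes/manifests
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml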
Nov 8 00:29:05.991473 kubelet[2354]: E1108 00:29:05.991455 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:06.040250 kubelet[2354]: E1108 00:29:06.040214 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" Nov 8 00:29:06.046353 kubelet[2354]: I1108 00:29:06.046331 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:06.046569 kubelet[2354]: E1108 00:29:06.046553 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Nov 8 00:29:06.137282 kubelet[2354]: I1108 00:29:06.137186 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:06.137282 kubelet[2354]: I1108 00:29:06.137216 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d782e18d8f4d22232283e7c4c7bd441c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d782e18d8f4d22232283e7c4c7bd441c\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:06.137282 kubelet[2354]: I1108 00:29:06.137236 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:06.137282 kubelet[2354]: I1108 00:29:06.137253 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:06.137935 kubelet[2354]: I1108 00:29:06.137910 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d782e18d8f4d22232283e7c4c7bd441c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d782e18d8f4d22232283e7c4c7bd441c\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:06.137971 kubelet[2354]: I1108 00:29:06.137946 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d782e18d8f4d22232283e7c4c7bd441c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d782e18d8f4d22232283e7c4c7bd441c\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:06.137993 kubelet[2354]: I1108 00:29:06.137962 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:06.137993 kubelet[2354]: I1108 00:29:06.137981 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:06.137993 kubelet[2354]: I1108 00:29:06.137990 2354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:06.247742 kubelet[2354]: I1108 00:29:06.247714 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:06.247961 kubelet[2354]: E1108 00:29:06.247945 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Nov 8 00:29:06.278249 containerd[1547]: time="2025-11-08T00:29:06.277904877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d782e18d8f4d22232283e7c4c7bd441c,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:06.292464 containerd[1547]: time="2025-11-08T00:29:06.292418807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:06.292626 containerd[1547]: time="2025-11-08T00:29:06.292600253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:06.441091 kubelet[2354]: E1108 00:29:06.441061 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" Nov 8 00:29:06.649443 kubelet[2354]: I1108 00:29:06.649280 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:06.649728 kubelet[2354]: E1108 00:29:06.649475 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Nov 8 00:29:06.728444 kubelet[2354]: E1108 00:29:06.728367 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:29:06.940077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796203737.mount: Deactivated successfully. 
Nov 8 00:29:06.944646 containerd[1547]: time="2025-11-08T00:29:06.942776946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:06.944646 containerd[1547]: time="2025-11-08T00:29:06.943277127Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:06.944646 containerd[1547]: time="2025-11-08T00:29:06.943893465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:29:06.944646 containerd[1547]: time="2025-11-08T00:29:06.943912818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:29:06.944646 containerd[1547]: time="2025-11-08T00:29:06.944557146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:29:06.944785 containerd[1547]: time="2025-11-08T00:29:06.944598003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:06.946771 containerd[1547]: time="2025-11-08T00:29:06.946756565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:06.947220 containerd[1547]: time="2025-11-08T00:29:06.947207307Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.716137ms" Nov 8 00:29:06.948057 containerd[1547]: time="2025-11-08T00:29:06.948035734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 655.352258ms" Nov 8 00:29:06.950618 containerd[1547]: time="2025-11-08T00:29:06.949907562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 671.932267ms" Nov 8 00:29:06.950618 containerd[1547]: time="2025-11-08T00:29:06.950226672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:29:07.072238 kubelet[2354]: E1108 00:29:07.072141 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:29:07.181365 kubelet[2354]: E1108 00:29:07.181262 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.188962980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.188986298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.188995805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.189037576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.185484130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.185513126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.185522683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:07.191282 containerd[1547]: time="2025-11-08T00:29:07.185564894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:07.194088 containerd[1547]: time="2025-11-08T00:29:07.190465973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:07.194088 containerd[1547]: time="2025-11-08T00:29:07.190491298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:07.194088 containerd[1547]: time="2025-11-08T00:29:07.190501412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:07.194088 containerd[1547]: time="2025-11-08T00:29:07.190549217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:07.226706 systemd[1]: Started cri-containerd-f84e729404b76f3b615c5d39da557d205398c382337ce20102ccdf351329f87c.scope - libcontainer container f84e729404b76f3b615c5d39da557d205398c382337ce20102ccdf351329f87c. Nov 8 00:29:07.229938 systemd[1]: Started cri-containerd-0a65b69cefe9a4eaf4454cc3f434d9c2193ecf8a3292bd5d7995d300d34ac5e3.scope - libcontainer container 0a65b69cefe9a4eaf4454cc3f434d9c2193ecf8a3292bd5d7995d300d34ac5e3. 
Nov 8 00:29:07.231077 systemd[1]: Started cri-containerd-56e745cfcc2801cdd126c8e7727e39cf69d3c8ccf9856da4881ce3d2f24e63b1.scope - libcontainer container 56e745cfcc2801cdd126c8e7727e39cf69d3c8ccf9856da4881ce3d2f24e63b1.
Nov 8 00:29:07.271695 kubelet[2354]: E1108 00:29:07.271200 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s"
Nov 8 00:29:07.273027 kubelet[2354]: E1108 00:29:07.272814 2354 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 8 00:29:07.310552 containerd[1547]: time="2025-11-08T00:29:07.310474087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f84e729404b76f3b615c5d39da557d205398c382337ce20102ccdf351329f87c\""
Nov 8 00:29:07.314023 containerd[1547]: time="2025-11-08T00:29:07.314001333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"56e745cfcc2801cdd126c8e7727e39cf69d3c8ccf9856da4881ce3d2f24e63b1\""
Nov 8 00:29:07.317519 containerd[1547]: time="2025-11-08T00:29:07.317453941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d782e18d8f4d22232283e7c4c7bd441c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a65b69cefe9a4eaf4454cc3f434d9c2193ecf8a3292bd5d7995d300d34ac5e3\""
Nov 8 00:29:07.697876 kubelet[2354]: I1108 00:29:07.450808 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 8 00:29:07.697876 kubelet[2354]: E1108 00:29:07.450993 2354 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Nov 8 00:29:07.701240 containerd[1547]: time="2025-11-08T00:29:07.701208918Z" level=info msg="CreateContainer within sandbox \"56e745cfcc2801cdd126c8e7727e39cf69d3c8ccf9856da4881ce3d2f24e63b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 8 00:29:07.732171 containerd[1547]: time="2025-11-08T00:29:07.732137006Z" level=info msg="CreateContainer within sandbox \"0a65b69cefe9a4eaf4454cc3f434d9c2193ecf8a3292bd5d7995d300d34ac5e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 8 00:29:07.740631 kubelet[2354]: E1108 00:29:07.737186 2354 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 8 00:29:07.760507 containerd[1547]: time="2025-11-08T00:29:07.760474957Z" level=info msg="CreateContainer within sandbox \"f84e729404b76f3b615c5d39da557d205398c382337ce20102ccdf351329f87c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 8 00:29:07.798717 containerd[1547]: time="2025-11-08T00:29:07.798682182Z" level=info msg="CreateContainer within sandbox \"56e745cfcc2801cdd126c8e7727e39cf69d3c8ccf9856da4881ce3d2f24e63b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e48af6e7730d8143f0f717e41985bc1144ff93b81085e2775e40426a80e27b4\""
Nov 8 00:29:07.799125 containerd[1547]: time="2025-11-08T00:29:07.798962986Z" level=info msg="CreateContainer within sandbox \"f84e729404b76f3b615c5d39da557d205398c382337ce20102ccdf351329f87c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b5512df52c7aec4ed99896614619af362ee13aa74a6fe4925481f3d4683368c3\""
Nov 8 00:29:07.799240 containerd[1547]: time="2025-11-08T00:29:07.799223048Z" level=info msg="StartContainer for \"b5512df52c7aec4ed99896614619af362ee13aa74a6fe4925481f3d4683368c3\""
Nov 8 00:29:07.802737 containerd[1547]: time="2025-11-08T00:29:07.802514365Z" level=info msg="CreateContainer within sandbox \"0a65b69cefe9a4eaf4454cc3f434d9c2193ecf8a3292bd5d7995d300d34ac5e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a197417d610f9a88e79edc2bb2d5cfd161ed23b063fcf66b3614ce7ff327b90\""
Nov 8 00:29:07.802737 containerd[1547]: time="2025-11-08T00:29:07.802639467Z" level=info msg="StartContainer for \"4e48af6e7730d8143f0f717e41985bc1144ff93b81085e2775e40426a80e27b4\""
Nov 8 00:29:07.805433 containerd[1547]: time="2025-11-08T00:29:07.805403420Z" level=info msg="StartContainer for \"9a197417d610f9a88e79edc2bb2d5cfd161ed23b063fcf66b3614ce7ff327b90\""
Nov 8 00:29:07.828763 systemd[1]: Started cri-containerd-4e48af6e7730d8143f0f717e41985bc1144ff93b81085e2775e40426a80e27b4.scope - libcontainer container 4e48af6e7730d8143f0f717e41985bc1144ff93b81085e2775e40426a80e27b4.
Nov 8 00:29:07.831454 systemd[1]: Started cri-containerd-b5512df52c7aec4ed99896614619af362ee13aa74a6fe4925481f3d4683368c3.scope - libcontainer container b5512df52c7aec4ed99896614619af362ee13aa74a6fe4925481f3d4683368c3.
Nov 8 00:29:07.835364 systemd[1]: Started cri-containerd-9a197417d610f9a88e79edc2bb2d5cfd161ed23b063fcf66b3614ce7ff327b90.scope - libcontainer container 9a197417d610f9a88e79edc2bb2d5cfd161ed23b063fcf66b3614ce7ff327b90.
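The entries above trace the call order the kubelet drives through the container runtime for each static pod: RunPodSandbox returns a sandbox id, CreateContainer inside that sandbox returns a container id, StartContainer runs it, and systemd wraps each container in its own cri-containerd-<id>.scope. A schematic Go sketch of that ordering; runtimeService is a hypothetical stand-in reduced to the three calls visible here, not the real CRI client interface.

    package main

    import "fmt"

    // runtimeService is a hypothetical stand-in for the runtime client,
    // reduced to the three calls visible in this log.
    type runtimeService interface {
    	RunPodSandbox(podName string) (sandboxID string, err error)
    	CreateContainer(sandboxID, containerName string) (containerID string, err error)
    	StartContainer(containerID string) error
    }

    // fakeRuntime fabricates ids so the sequence is runnable end to end.
    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
    	f.n++
    	return fmt.Sprintf("sandbox-%d-%s", f.n, pod), nil
    }

    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
    	return sb + "/" + name, nil
    }

    func (f *fakeRuntime) StartContainer(id string) error {
    	fmt.Printf("StartContainer for %q returns successfully\n", id)
    	return nil
    }

    func main() {
    	var rt runtimeService = &fakeRuntime{}
    	for _, pod := range []string{"kube-apiserver-localhost", "kube-controller-manager-localhost", "kube-scheduler-localhost"} {
    		sb, _ := rt.RunPodSandbox(pod)       // sandbox first
    		id, _ := rt.CreateContainer(sb, pod) // then the container inside it
    		_ = rt.StartContainer(id)            // then start it
    	}
    }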
Nov 8 00:29:07.883632 containerd[1547]: time="2025-11-08T00:29:07.882562140Z" level=info msg="StartContainer for \"9a197417d610f9a88e79edc2bb2d5cfd161ed23b063fcf66b3614ce7ff327b90\" returns successfully" Nov 8 00:29:07.891789 containerd[1547]: time="2025-11-08T00:29:07.891763805Z" level=info msg="StartContainer for \"4e48af6e7730d8143f0f717e41985bc1144ff93b81085e2775e40426a80e27b4\" returns successfully" Nov 8 00:29:07.892259 containerd[1547]: time="2025-11-08T00:29:07.891815448Z" level=info msg="StartContainer for \"b5512df52c7aec4ed99896614619af362ee13aa74a6fe4925481f3d4683368c3\" returns successfully" Nov 8 00:29:08.874374 kubelet[2354]: E1108 00:29:08.874257 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:08.876767 kubelet[2354]: E1108 00:29:08.876756 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:08.878228 kubelet[2354]: E1108 00:29:08.878139 2354 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:29:09.052679 kubelet[2354]: I1108 00:29:09.052505 2354 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:09.501327 kubelet[2354]: E1108 00:29:09.501304 2354 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:29:09.604542 kubelet[2354]: I1108 00:29:09.604508 2354 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:29:09.604542 kubelet[2354]: E1108 00:29:09.604543 2354 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 8 00:29:09.611063 kubelet[2354]: E1108 00:29:09.611043 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:29:09.711769 kubelet[2354]: E1108 00:29:09.711740 2354 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:29:09.810172 kubelet[2354]: I1108 00:29:09.810056 2354 apiserver.go:52] "Watching apiserver" Nov 8 00:29:09.836878 kubelet[2354]: I1108 00:29:09.836681 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:09.837019 kubelet[2354]: I1108 00:29:09.837002 2354 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:29:09.840635 kubelet[2354]: E1108 00:29:09.840522 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:09.840635 kubelet[2354]: I1108 00:29:09.840535 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:09.841507 kubelet[2354]: E1108 00:29:09.841429 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:09.841507 kubelet[2354]: I1108 00:29:09.841440 2354 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:09.842285 kubelet[2354]: E1108 00:29:09.842272 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:09.879626 kubelet[2354]: I1108 00:29:09.878848 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:09.879626 kubelet[2354]: I1108 00:29:09.878907 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:09.880047 kubelet[2354]: I1108 00:29:09.879925 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:09.880489 kubelet[2354]: E1108 00:29:09.880384 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:09.881134 kubelet[2354]: E1108 00:29:09.881115 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:09.881258 kubelet[2354]: E1108 00:29:09.881175 2354 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:10.880089 kubelet[2354]: I1108 00:29:10.880066 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:10.880365 kubelet[2354]: I1108 00:29:10.880311 2354 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:11.561031 systemd[1]: Reloading requested from client PID 2633 ('systemctl') (unit session-9.scope)... Nov 8 00:29:11.561048 systemd[1]: Reloading... Nov 8 00:29:11.611637 zram_generator::config[2671]: No configuration found. Nov 8 00:29:11.680723 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:29:11.696038 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:11.747934 systemd[1]: Reloading finished in 186 ms. Nov 8 00:29:11.768135 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:11.776333 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:29:11.776481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:11.783853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:12.299619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:12.309025 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:29:12.416213 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:29:12.416514 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:29:12.416514 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:29:12.416697 kubelet[2738]: I1108 00:29:12.416564 2738 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:29:12.420245 kubelet[2738]: I1108 00:29:12.420234 2738 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:29:12.421047 kubelet[2738]: I1108 00:29:12.420318 2738 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:29:12.421047 kubelet[2738]: I1108 00:29:12.420420 2738 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:29:12.422515 kubelet[2738]: I1108 00:29:12.422499 2738 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:29:12.428539 kubelet[2738]: I1108 00:29:12.428523 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:29:12.444076 kubelet[2738]: E1108 00:29:12.444049 2738 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:29:12.445921 kubelet[2738]: I1108 00:29:12.445906 2738 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:29:12.449264 kubelet[2738]: I1108 00:29:12.449222 2738 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:29:12.449569 kubelet[2738]: I1108 00:29:12.449448 2738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:29:12.449569 kubelet[2738]: I1108 00:29:12.449464 2738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:29:12.452416 kubelet[2738]: I1108 00:29:12.452402 2738 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:29:12.452416 kubelet[2738]: I1108 00:29:12.452417 2738 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:29:12.452469 kubelet[2738]: I1108 00:29:12.452446 2738 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:12.452576 kubelet[2738]: I1108 00:29:12.452567 2738 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:29:12.452597 kubelet[2738]: I1108 00:29:12.452578 2738 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:29:12.452597 kubelet[2738]: I1108 00:29:12.452592 2738 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:29:12.452771 kubelet[2738]: I1108 00:29:12.452599 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:29:12.455111 kubelet[2738]: I1108 00:29:12.455094 2738 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:29:12.455823 kubelet[2738]: I1108 00:29:12.455366 2738 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:29:12.458896 kubelet[2738]: I1108 00:29:12.458881 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:29:12.458936 kubelet[2738]: I1108 00:29:12.458911 2738 server.go:1289] "Started kubelet" Nov 8 00:29:12.466671 kubelet[2738]: I1108 00:29:12.465621 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:29:12.472271 kubelet[2738]: I1108 00:29:12.471311 2738 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:29:12.475140 kubelet[2738]: I1108 00:29:12.474578 2738 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:29:12.477837 kubelet[2738]: I1108 00:29:12.477798 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:29:12.478587 kubelet[2738]: I1108 00:29:12.478414 2738 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:29:12.479326 kubelet[2738]: I1108 00:29:12.479309 2738 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:29:12.479487 kubelet[2738]: I1108 00:29:12.479455 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:29:12.482621 kubelet[2738]: I1108 00:29:12.482589 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:29:12.482826 kubelet[2738]: I1108 00:29:12.482742 2738 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:29:12.483669 kubelet[2738]: I1108 00:29:12.483657 2738 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:29:12.483779 kubelet[2738]: I1108 00:29:12.483749 2738 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:29:12.484792 kubelet[2738]: I1108 00:29:12.484779 2738 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:29:12.486290 kubelet[2738]: I1108 00:29:12.485816 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:29:12.487131 kubelet[2738]: I1108 00:29:12.487121 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:29:12.487319 kubelet[2738]: I1108 00:29:12.487175 2738 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:29:12.487319 kubelet[2738]: I1108 00:29:12.487188 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:29:12.487319 kubelet[2738]: I1108 00:29:12.487192 2738 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:29:12.487319 kubelet[2738]: E1108 00:29:12.487213 2738 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:29:12.492151 kubelet[2738]: E1108 00:29:12.492138 2738 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:29:12.519941 kubelet[2738]: I1108 00:29:12.519927 2738 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:29:12.520046 kubelet[2738]: I1108 00:29:12.520035 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520104 2738 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520179 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520188 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520199 2738 policy_none.go:49] "None policy: Start" Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520206 2738 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520212 2738 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:29:12.520296 kubelet[2738]: I1108 00:29:12.520263 2738 state_mem.go:75] "Updated machine memory state" Nov 8 00:29:12.522910 kubelet[2738]: E1108 00:29:12.522902 2738 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:29:12.523884 kubelet[2738]: I1108 00:29:12.523374 2738 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:29:12.523884 kubelet[2738]: I1108 00:29:12.523383 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:29:12.523884 kubelet[2738]: I1108 00:29:12.523633 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:29:12.525456 kubelet[2738]: E1108 00:29:12.525446 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:29:12.587855 kubelet[2738]: I1108 00:29:12.587756 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:12.588070 kubelet[2738]: I1108 00:29:12.587816 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:12.588995 kubelet[2738]: I1108 00:29:12.587890 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:12.592303 kubelet[2738]: E1108 00:29:12.592149 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:12.592493 kubelet[2738]: E1108 00:29:12.592481 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:12.626832 kubelet[2738]: I1108 00:29:12.626311 2738 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:29:12.629905 kubelet[2738]: I1108 00:29:12.629851 2738 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:29:12.630375 kubelet[2738]: I1108 00:29:12.629945 2738 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:29:12.683905 kubelet[2738]: I1108 00:29:12.683749 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:12.683905 kubelet[2738]: I1108 00:29:12.683775 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:12.683905 kubelet[2738]: I1108 00:29:12.683787 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d782e18d8f4d22232283e7c4c7bd441c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d782e18d8f4d22232283e7c4c7bd441c\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:12.683905 kubelet[2738]: I1108 00:29:12.683802 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:12.683905 kubelet[2738]: I1108 00:29:12.683814 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:12.684077 kubelet[2738]: I1108 00:29:12.683826 2738 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:12.684077 kubelet[2738]: I1108 00:29:12.683836 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d782e18d8f4d22232283e7c4c7bd441c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d782e18d8f4d22232283e7c4c7bd441c\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:12.684077 kubelet[2738]: I1108 00:29:12.683847 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d782e18d8f4d22232283e7c4c7bd441c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d782e18d8f4d22232283e7c4c7bd441c\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:12.684077 kubelet[2738]: I1108 00:29:12.683856 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:13.453892 kubelet[2738]: I1108 00:29:13.453871 2738 apiserver.go:52] "Watching apiserver" Nov 8 00:29:13.482761 kubelet[2738]: I1108 00:29:13.482734 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:29:13.513206 kubelet[2738]: I1108 00:29:13.513184 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:13.513703 kubelet[2738]: I1108 00:29:13.513416 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:13.513703 kubelet[2738]: I1108 00:29:13.513517 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:13.516981 kubelet[2738]: E1108 00:29:13.516808 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:29:13.517625 kubelet[2738]: E1108 00:29:13.517336 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:29:13.519746 kubelet[2738]: E1108 00:29:13.519732 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:29:13.553292 kubelet[2738]: I1108 00:29:13.553233 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.553221907 podStartE2EDuration="3.553221907s" podCreationTimestamp="2025-11-08 00:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:13.552711021 +0000 UTC m=+1.169222941" watchObservedRunningTime="2025-11-08 00:29:13.553221907 +0000 UTC m=+1.169733823" Nov 8 00:29:13.553433 kubelet[2738]: I1108 00:29:13.553320 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.553315553 podStartE2EDuration="1.553315553s" podCreationTimestamp="2025-11-08 00:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:13.544436922 +0000 UTC m=+1.160948847" watchObservedRunningTime="2025-11-08 00:29:13.553315553 +0000 UTC m=+1.169827478" Nov 8 00:29:16.606005 kubelet[2738]: I1108 00:29:16.605978 2738 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:29:16.606313 containerd[1547]: time="2025-11-08T00:29:16.606219852Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:29:16.606478 kubelet[2738]: I1108 00:29:16.606331 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:29:17.023980 kubelet[2738]: I1108 00:29:17.023947 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.023935359 podStartE2EDuration="7.023935359s" podCreationTimestamp="2025-11-08 00:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:13.560281936 +0000 UTC m=+1.176793861" watchObservedRunningTime="2025-11-08 00:29:17.023935359 +0000 UTC m=+4.640447279" Nov 8 00:29:17.032922 systemd[1]: Created slice kubepods-besteffort-pod94536301_3458_4ccf_a647_7dcea95875d6.slice - libcontainer container kubepods-besteffort-pod94536301_3458_4ccf_a647_7dcea95875d6.slice. Nov 8 00:29:17.108705 kubelet[2738]: I1108 00:29:17.108644 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94536301-3458-4ccf-a647-7dcea95875d6-kube-proxy\") pod \"kube-proxy-rg7jx\" (UID: \"94536301-3458-4ccf-a647-7dcea95875d6\") " pod="kube-system/kube-proxy-rg7jx" Nov 8 00:29:17.108705 kubelet[2738]: I1108 00:29:17.108676 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94536301-3458-4ccf-a647-7dcea95875d6-xtables-lock\") pod \"kube-proxy-rg7jx\" (UID: \"94536301-3458-4ccf-a647-7dcea95875d6\") " pod="kube-system/kube-proxy-rg7jx" Nov 8 00:29:17.108849 kubelet[2738]: I1108 00:29:17.108726 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94536301-3458-4ccf-a647-7dcea95875d6-lib-modules\") pod \"kube-proxy-rg7jx\" (UID: \"94536301-3458-4ccf-a647-7dcea95875d6\") " pod="kube-system/kube-proxy-rg7jx" Nov 8 00:29:17.108849 kubelet[2738]: I1108 00:29:17.108760 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fc4\" (UniqueName: \"kubernetes.io/projected/94536301-3458-4ccf-a647-7dcea95875d6-kube-api-access-l9fc4\") pod \"kube-proxy-rg7jx\" (UID: \"94536301-3458-4ccf-a647-7dcea95875d6\") " pod="kube-system/kube-proxy-rg7jx" Nov 8 00:29:17.212428 kubelet[2738]: E1108 00:29:17.212375 2738 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 8 00:29:17.212428 kubelet[2738]: E1108 00:29:17.212424 2738 projected.go:194] Error preparing data for projected volume kube-api-access-l9fc4 for 
pod kube-system/kube-proxy-rg7jx: configmap "kube-root-ca.crt" not found Nov 8 00:29:17.212544 kubelet[2738]: E1108 00:29:17.212496 2738 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94536301-3458-4ccf-a647-7dcea95875d6-kube-api-access-l9fc4 podName:94536301-3458-4ccf-a647-7dcea95875d6 nodeName:}" failed. No retries permitted until 2025-11-08 00:29:17.712456117 +0000 UTC m=+5.328968039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l9fc4" (UniqueName: "kubernetes.io/projected/94536301-3458-4ccf-a647-7dcea95875d6-kube-api-access-l9fc4") pod "kube-proxy-rg7jx" (UID: "94536301-3458-4ccf-a647-7dcea95875d6") : configmap "kube-root-ca.crt" not found Nov 8 00:29:17.713356 systemd[1]: Created slice kubepods-besteffort-pod32cc11f0_af57_45da_9f7e_ab4243a856fb.slice - libcontainer container kubepods-besteffort-pod32cc11f0_af57_45da_9f7e_ab4243a856fb.slice. Nov 8 00:29:17.813337 kubelet[2738]: I1108 00:29:17.813300 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32cc11f0-af57-45da-9f7e-ab4243a856fb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2zgkd\" (UID: \"32cc11f0-af57-45da-9f7e-ab4243a856fb\") " pod="tigera-operator/tigera-operator-7dcd859c48-2zgkd" Nov 8 00:29:17.813781 kubelet[2738]: I1108 00:29:17.813713 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmht\" (UniqueName: \"kubernetes.io/projected/32cc11f0-af57-45da-9f7e-ab4243a856fb-kube-api-access-zpmht\") pod \"tigera-operator-7dcd859c48-2zgkd\" (UID: \"32cc11f0-af57-45da-9f7e-ab4243a856fb\") " pod="tigera-operator/tigera-operator-7dcd859c48-2zgkd" Nov 8 00:29:17.941168 containerd[1547]: time="2025-11-08T00:29:17.941138041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg7jx,Uid:94536301-3458-4ccf-a647-7dcea95875d6,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:17.960043 containerd[1547]: time="2025-11-08T00:29:17.959758938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:17.960043 containerd[1547]: time="2025-11-08T00:29:17.959811111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:17.960043 containerd[1547]: time="2025-11-08T00:29:17.959822470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:17.960043 containerd[1547]: time="2025-11-08T00:29:17.959919757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:17.978705 systemd[1]: Started cri-containerd-97779d6be5e483db1297bcc0618e7c30bf4c4268ae72e0643524a84614c07f64.scope - libcontainer container 97779d6be5e483db1297bcc0618e7c30bf4c4268ae72e0643524a84614c07f64. 
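The mount failure above is self-resolving: the projected service-account volume needs the kube-root-ca.crt ConfigMap, which the controller manager publishes once it is running, so the reconciler just schedules a retry 500ms later. A hedged client-go sketch performing the same existence check; the kubeconfig path is an assumption for illustration, and the Get signature assumes a recent client-go.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed admin kubeconfig path; adjust for the host at hand.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
    		context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
    	if err != nil {
    		fmt.Println("not published yet:", err) // the state this log captures
    		return
    	}
    	fmt.Printf("kube-root-ca.crt present, %d bytes of CA bundle\n", len(cm.Data["ca.crt"]))
    }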
Nov 8 00:29:17.993753 containerd[1547]: time="2025-11-08T00:29:17.993727371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg7jx,Uid:94536301-3458-4ccf-a647-7dcea95875d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"97779d6be5e483db1297bcc0618e7c30bf4c4268ae72e0643524a84614c07f64\"" Nov 8 00:29:17.996621 containerd[1547]: time="2025-11-08T00:29:17.996560124Z" level=info msg="CreateContainer within sandbox \"97779d6be5e483db1297bcc0618e7c30bf4c4268ae72e0643524a84614c07f64\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:29:18.002260 containerd[1547]: time="2025-11-08T00:29:18.002235257Z" level=info msg="CreateContainer within sandbox \"97779d6be5e483db1297bcc0618e7c30bf4c4268ae72e0643524a84614c07f64\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50e35556c213849a1e15ee88d89f5c4e2f3764bfb9408ebc80fbc386de3e2083\"" Nov 8 00:29:18.003234 containerd[1547]: time="2025-11-08T00:29:18.003217158Z" level=info msg="StartContainer for \"50e35556c213849a1e15ee88d89f5c4e2f3764bfb9408ebc80fbc386de3e2083\"" Nov 8 00:29:18.016309 containerd[1547]: time="2025-11-08T00:29:18.016027830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2zgkd,Uid:32cc11f0-af57-45da-9f7e-ab4243a856fb,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:29:18.019735 systemd[1]: Started cri-containerd-50e35556c213849a1e15ee88d89f5c4e2f3764bfb9408ebc80fbc386de3e2083.scope - libcontainer container 50e35556c213849a1e15ee88d89f5c4e2f3764bfb9408ebc80fbc386de3e2083. Nov 8 00:29:18.056334 containerd[1547]: time="2025-11-08T00:29:18.056308527Z" level=info msg="StartContainer for \"50e35556c213849a1e15ee88d89f5c4e2f3764bfb9408ebc80fbc386de3e2083\" returns successfully" Nov 8 00:29:18.090168 containerd[1547]: time="2025-11-08T00:29:18.089855080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:18.090168 containerd[1547]: time="2025-11-08T00:29:18.089893851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:18.090168 containerd[1547]: time="2025-11-08T00:29:18.089902518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.090168 containerd[1547]: time="2025-11-08T00:29:18.089957474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:18.104713 systemd[1]: Started cri-containerd-1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4.scope - libcontainer container 1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4. 
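Image pulls in this log are recorded under both reference shapes: a repo tag (quay.io/tigera/operator:v1.38.7 here, registry.k8s.io/pause:3.8 earlier) and a repo digest (name@sha256:...). A small sketch that splits the two shapes; real reference parsing also has to handle registry host:port and default tags, so this only covers the forms that actually appear in this log.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitRef handles the two shapes seen here: name:tag and name@sha256:digest.
    func splitRef(ref string) (name, tag, digest string) {
    	if i := strings.Index(ref, "@"); i >= 0 {
    		return ref[:i], "", ref[i+1:]
    	}
    	if i := strings.LastIndex(ref, ":"); i >= 0 {
    		return ref[:i], ref[i+1:], ""
    	}
    	return ref, "", ""
    }

    func main() {
    	for _, ref := range []string{
    		"quay.io/tigera/operator:v1.38.7",
    		"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d",
    	} {
    		n, t, d := splitRef(ref)
    		fmt.Printf("name=%s tag=%s digest=%s\n", n, t, d)
    	}
    }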
Nov 8 00:29:18.133338 containerd[1547]: time="2025-11-08T00:29:18.133314515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2zgkd,Uid:32cc11f0-af57-45da-9f7e-ab4243a856fb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4\""
Nov 8 00:29:18.134689 containerd[1547]: time="2025-11-08T00:29:18.134659743Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 8 00:29:18.532299 kubelet[2738]: I1108 00:29:18.532117 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rg7jx" podStartSLOduration=1.532105508 podStartE2EDuration="1.532105508s" podCreationTimestamp="2025-11-08 00:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:18.531984829 +0000 UTC m=+6.148496763" watchObservedRunningTime="2025-11-08 00:29:18.532105508 +0000 UTC m=+6.148617434"
Nov 8 00:29:18.818613 systemd[1]: run-containerd-runc-k8s.io-97779d6be5e483db1297bcc0618e7c30bf4c4268ae72e0643524a84614c07f64-runc.VD4yHQ.mount: Deactivated successfully.
Nov 8 00:29:19.498442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291824815.mount: Deactivated successfully.
Nov 8 00:29:20.081284 containerd[1547]: time="2025-11-08T00:29:20.081090275Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:29:20.081284 containerd[1547]: time="2025-11-08T00:29:20.081244150Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 8 00:29:20.082650 containerd[1547]: time="2025-11-08T00:29:20.081859577Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:29:20.083131 containerd[1547]: time="2025-11-08T00:29:20.083110567Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:29:20.083635 containerd[1547]: time="2025-11-08T00:29:20.083613197Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.948923014s"
Nov 8 00:29:20.083705 containerd[1547]: time="2025-11-08T00:29:20.083693955Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 8 00:29:20.086582 containerd[1547]: time="2025-11-08T00:29:20.086562347Z" level=info msg="CreateContainer within sandbox \"1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 8 00:29:20.133833 containerd[1547]: time="2025-11-08T00:29:20.133806951Z" level=info msg="CreateContainer within sandbox \"1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc\""
Nov 8 00:29:20.134427 containerd[1547]: time="2025-11-08T00:29:20.134408981Z" level=info msg="StartContainer for \"11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc\""
Nov 8 00:29:20.161709 systemd[1]: Started cri-containerd-11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc.scope - libcontainer container 11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc.
Nov 8 00:29:20.181502 containerd[1547]: time="2025-11-08T00:29:20.181476909Z" level=info msg="StartContainer for \"11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc\" returns successfully"
Nov 8 00:29:20.534552 kubelet[2738]: I1108 00:29:20.534424 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2zgkd" podStartSLOduration=1.584415034 podStartE2EDuration="3.534413052s" podCreationTimestamp="2025-11-08 00:29:17 +0000 UTC" firstStartedPulling="2025-11-08 00:29:18.134179419 +0000 UTC m=+5.750691332" lastFinishedPulling="2025-11-08 00:29:20.084177427 +0000 UTC m=+7.700689350" observedRunningTime="2025-11-08 00:29:20.533885864 +0000 UTC m=+8.150397789" watchObservedRunningTime="2025-11-08 00:29:20.534413052 +0000 UTC m=+8.150924972"
Nov 8 00:29:22.395005 systemd[1]: cri-containerd-11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc.scope: Deactivated successfully.
Nov 8 00:29:22.468739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc-rootfs.mount: Deactivated successfully.
Nov 8 00:29:22.521076 containerd[1547]: time="2025-11-08T00:29:22.475641317Z" level=info msg="shim disconnected" id=11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc namespace=k8s.io
Nov 8 00:29:22.521337 containerd[1547]: time="2025-11-08T00:29:22.521077716Z" level=warning msg="cleaning up after shim disconnected" id=11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc namespace=k8s.io
Nov 8 00:29:22.521337 containerd[1547]: time="2025-11-08T00:29:22.521089375Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:29:23.544515 kubelet[2738]: I1108 00:29:23.544489 2738 scope.go:117] "RemoveContainer" containerID="11b89c0dc5acc2251e8a4a58209d823ed7883a6c00658691032d1bd3babd74fc"
Nov 8 00:29:23.548231 containerd[1547]: time="2025-11-08T00:29:23.546295758Z" level=info msg="CreateContainer within sandbox \"1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 8 00:29:23.558345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569902935.mount: Deactivated successfully.
Nov 8 00:29:23.570059 containerd[1547]: time="2025-11-08T00:29:23.569965606Z" level=info msg="CreateContainer within sandbox \"1ff369e8221f76c7d114dd53cee20373e824c1555873669c9db3831078baf9f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d4e296ded3d2210295ca79a34a9be1d18decf75213160e85462e5bfdd82f399f\""
Nov 8 00:29:23.570612 containerd[1547]: time="2025-11-08T00:29:23.570181563Z" level=info msg="StartContainer for \"d4e296ded3d2210295ca79a34a9be1d18decf75213160e85462e5bfdd82f399f\""
Nov 8 00:29:23.586660 systemd[1]: run-containerd-runc-k8s.io-d4e296ded3d2210295ca79a34a9be1d18decf75213160e85462e5bfdd82f399f-runc.viPlxF.mount: Deactivated successfully.
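The pod_startup_latency_tracker entry for tigera-operator above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (00:29:20.534413052 - 00:29:17 = 3.534413052s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (m=+7.700689350 - m=+5.750691332 = 1.949998018s, leaving 1.584415034s). A short Go check of that arithmetic using values copied from the entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps exactly as printed by the tracker above.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-11-08 00:29:17 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-11-08 00:29:20.534413052 +0000 UTC")

    	e2e := running.Sub(created)
    	// Image-pull window from the m=+... monotonic-clock offsets.
    	pull := time.Duration((7.700689350 - 5.750691332) * float64(time.Second))

    	fmt.Println("podStartE2EDuration:", e2e)      // 3.534413052s
    	fmt.Println("podStartSLOduration:", e2e-pull) // 1.584415034s
    }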
Nov 8 00:29:23.598752 systemd[1]: Started cri-containerd-d4e296ded3d2210295ca79a34a9be1d18decf75213160e85462e5bfdd82f399f.scope - libcontainer container d4e296ded3d2210295ca79a34a9be1d18decf75213160e85462e5bfdd82f399f.
Nov 8 00:29:23.615875 containerd[1547]: time="2025-11-08T00:29:23.615817775Z" level=info msg="StartContainer for \"d4e296ded3d2210295ca79a34a9be1d18decf75213160e85462e5bfdd82f399f\" returns successfully"
Nov 8 00:29:25.283086 sudo[1834]: pam_unix(sudo:session): session closed for user root
Nov 8 00:29:25.285494 sshd[1831]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:25.287654 systemd[1]: sshd@6-139.178.70.106:22-147.75.109.163:41446.service: Deactivated successfully.
Nov 8 00:29:25.288929 systemd[1]: session-9.scope: Deactivated successfully.
Nov 8 00:29:25.289126 systemd[1]: session-9.scope: Consumed 3.415s CPU time, 144.6M memory peak, 0B memory swap peak.
Nov 8 00:29:25.289583 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit.
Nov 8 00:29:25.290457 systemd-logind[1519]: Removed session 9.
Nov 8 00:29:30.423251 systemd[1]: Created slice kubepods-besteffort-podf19e8d9f_7a76_4201_ab62_cf1abaed0d7c.slice - libcontainer container kubepods-besteffort-podf19e8d9f_7a76_4201_ab62_cf1abaed0d7c.slice.
Nov 8 00:29:30.507047 kubelet[2738]: I1108 00:29:30.507018 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f19e8d9f-7a76-4201-ab62-cf1abaed0d7c-typha-certs\") pod \"calico-typha-7dcc55bf47-b8pgb\" (UID: \"f19e8d9f-7a76-4201-ab62-cf1abaed0d7c\") " pod="calico-system/calico-typha-7dcc55bf47-b8pgb"
Nov 8 00:29:30.507379 kubelet[2738]: I1108 00:29:30.507330 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f19e8d9f-7a76-4201-ab62-cf1abaed0d7c-tigera-ca-bundle\") pod \"calico-typha-7dcc55bf47-b8pgb\" (UID: \"f19e8d9f-7a76-4201-ab62-cf1abaed0d7c\") " pod="calico-system/calico-typha-7dcc55bf47-b8pgb"
Nov 8 00:29:30.507379 kubelet[2738]: I1108 00:29:30.507349 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fk24\" (UniqueName: \"kubernetes.io/projected/f19e8d9f-7a76-4201-ab62-cf1abaed0d7c-kube-api-access-6fk24\") pod \"calico-typha-7dcc55bf47-b8pgb\" (UID: \"f19e8d9f-7a76-4201-ab62-cf1abaed0d7c\") " pod="calico-system/calico-typha-7dcc55bf47-b8pgb"
Nov 8 00:29:30.584663 systemd[1]: Created slice kubepods-besteffort-pod93ecf8b5_1d82_4132_b8b1_c73119f42320.slice - libcontainer container kubepods-besteffort-pod93ecf8b5_1d82_4132_b8b1_c73119f42320.slice.
Nov 8 00:29:30.607736 kubelet[2738]: I1108 00:29:30.607703 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-var-lib-calico\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607834 kubelet[2738]: I1108 00:29:30.607755 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-xtables-lock\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607834 kubelet[2738]: I1108 00:29:30.607792 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93ecf8b5-1d82-4132-b8b1-c73119f42320-tigera-ca-bundle\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607834 kubelet[2738]: I1108 00:29:30.607809 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-var-run-calico\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607834 kubelet[2738]: I1108 00:29:30.607826 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwzs6\" (UniqueName: \"kubernetes.io/projected/93ecf8b5-1d82-4132-b8b1-c73119f42320-kube-api-access-cwzs6\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607917 kubelet[2738]: I1108 00:29:30.607845 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-policysync\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607917 kubelet[2738]: I1108 00:29:30.607871 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-cni-bin-dir\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607917 kubelet[2738]: I1108 00:29:30.607886 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-cni-log-dir\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607917 kubelet[2738]: I1108 00:29:30.607902 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-cni-net-dir\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607983 kubelet[2738]: I1108 00:29:30.607917 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93ecf8b5-1d82-4132-b8b1-c73119f42320-node-certs\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607983 kubelet[2738]: I1108 00:29:30.607948 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-flexvol-driver-host\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.607983 kubelet[2738]: I1108 00:29:30.607963 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93ecf8b5-1d82-4132-b8b1-c73119f42320-lib-modules\") pod \"calico-node-tppxc\" (UID: \"93ecf8b5-1d82-4132-b8b1-c73119f42320\") " pod="calico-system/calico-node-tppxc"
Nov 8 00:29:30.731012 containerd[1547]: time="2025-11-08T00:29:30.730612441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dcc55bf47-b8pgb,Uid:f19e8d9f-7a76-4201-ab62-cf1abaed0d7c,Namespace:calico-system,Attempt:0,}"
Nov 8 00:29:30.746265 containerd[1547]: time="2025-11-08T00:29:30.746181660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:29:30.746265 containerd[1547]: time="2025-11-08T00:29:30.746236627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:29:30.746265 containerd[1547]: time="2025-11-08T00:29:30.746247476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:30.746488 containerd[1547]: time="2025-11-08T00:29:30.746309075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:30.770739 systemd[1]: Started cri-containerd-4363a01b21fb9eb0147b65226fd1120704881fefc6b78ee3c04186e72d17ec71.scope - libcontainer container 4363a01b21fb9eb0147b65226fd1120704881fefc6b78ee3c04186e72d17ec71.
Nov 8 00:29:30.774128 kubelet[2738]: E1108 00:29:30.773835 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01"
Nov 8 00:29:30.789193 kubelet[2738]: E1108 00:29:30.789116 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:29:30.789193 kubelet[2738]: W1108 00:29:30.789131 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:29:30.789193 kubelet[2738]: E1108 00:29:30.789147 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three kubelet FlexVolume probe messages above (driver-call.go:262, driver-call.go:149, plugins.go:703) repeat with new timestamps roughly 34 more times through Nov 8 00:29:30.814, interleaved with the csi-node-driver-w4kl5 volume events below; the duplicate repetitions are omitted.]
Nov 8 00:29:30.817904 kubelet[2738]: I1108 00:29:30.811424 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1ec52db-bd41-4d19-b1f6-a1fab4a28f01-registration-dir\") pod \"csi-node-driver-w4kl5\" (UID: \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\") " pod="calico-system/csi-node-driver-w4kl5"
Nov 8 00:29:30.818048 kubelet[2738]: I1108 00:29:30.811696 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqld\" (UniqueName: \"kubernetes.io/projected/a1ec52db-bd41-4d19-b1f6-a1fab4a28f01-kube-api-access-bjqld\") pod \"csi-node-driver-w4kl5\" (UID: \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\") " pod="calico-system/csi-node-driver-w4kl5"
Nov 8 00:29:30.818048 kubelet[2738]: I1108 00:29:30.811839 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a1ec52db-bd41-4d19-b1f6-a1fab4a28f01-varrun\") pod \"csi-node-driver-w4kl5\" (UID: \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\") " pod="calico-system/csi-node-driver-w4kl5"
Nov 8 00:29:30.818173 kubelet[2738]: I1108 00:29:30.811984 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1ec52db-bd41-4d19-b1f6-a1fab4a28f01-kubelet-dir\") pod \"csi-node-driver-w4kl5\" (UID: \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\") " pod="calico-system/csi-node-driver-w4kl5"
Nov 8 00:29:30.818173 kubelet[2738]: I1108 00:29:30.812101 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1ec52db-bd41-4d19-b1f6-a1fab4a28f01-socket-dir\") pod \"csi-node-driver-w4kl5\" (UID: \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\") " pod="calico-system/csi-node-driver-w4kl5"
Nov 8 00:29:30.829599 containerd[1547]: time="2025-11-08T00:29:30.829578068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dcc55bf47-b8pgb,Uid:f19e8d9f-7a76-4201-ab62-cf1abaed0d7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4363a01b21fb9eb0147b65226fd1120704881fefc6b78ee3c04186e72d17ec71\""
Nov 8 00:29:30.841767 containerd[1547]: time="2025-11-08T00:29:30.841692274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:29:30.887384 containerd[1547]: time="2025-11-08T00:29:30.887314423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tppxc,Uid:93ecf8b5-1d82-4132-b8b1-c73119f42320,Namespace:calico-system,Attempt:0,}"
[The same three FlexVolume probe messages repeat roughly 26 more times between Nov 8 00:29:30.913 and 00:29:30.933, interleaved with the calico-node sandbox startup below; omitted.]
Nov 8 00:29:30.925757 containerd[1547]: time="2025-11-08T00:29:30.914373237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:29:30.925757 containerd[1547]: time="2025-11-08T00:29:30.914440191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:29:30.925757 containerd[1547]: time="2025-11-08T00:29:30.914461541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:30.925757 containerd[1547]: time="2025-11-08T00:29:30.914565867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:29:30.944772 systemd[1]: Started cri-containerd-18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea.scope - libcontainer container 18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea.
Nov 8 00:29:30.963738 containerd[1547]: time="2025-11-08T00:29:30.963710455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tppxc,Uid:93ecf8b5-1d82-4132-b8b1-c73119f42320,Namespace:calico-system,Attempt:0,} returns sandbox id \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\""
Nov 8 00:29:32.367866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610876635.mount: Deactivated successfully.
Nov 8 00:29:32.489577 kubelet[2738]: E1108 00:29:32.489438 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01"
Nov 8 00:29:32.861329 containerd[1547]: time="2025-11-08T00:29:32.861306059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:29:32.862013 containerd[1547]: time="2025-11-08T00:29:32.861724434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:29:32.862013 containerd[1547]: time="2025-11-08T00:29:32.861993530Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:29:32.863073 containerd[1547]: time="2025-11-08T00:29:32.863051212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:29:32.863700 containerd[1547]: time="2025-11-08T00:29:32.863466577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.021653474s"
Nov 8 00:29:32.863700 containerd[1547]: time="2025-11-08T00:29:32.863485325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 00:29:32.866423 containerd[1547]: time="2025-11-08T00:29:32.866412405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:29:32.879694 containerd[1547]: time="2025-11-08T00:29:32.879664483Z" level=info msg="CreateContainer within sandbox \"4363a01b21fb9eb0147b65226fd1120704881fefc6b78ee3c04186e72d17ec71\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:29:32.940777 containerd[1547]: time="2025-11-08T00:29:32.940746165Z" level=info msg="CreateContainer within sandbox \"4363a01b21fb9eb0147b65226fd1120704881fefc6b78ee3c04186e72d17ec71\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9920b6768c1db67b1ba964ea8b209e9d743d7ac5b08d8e7bd0a74763561b36f0\""
level=info msg="StartContainer for \"9920b6768c1db67b1ba964ea8b209e9d743d7ac5b08d8e7bd0a74763561b36f0\"" Nov 8 00:29:33.025700 systemd[1]: Started cri-containerd-9920b6768c1db67b1ba964ea8b209e9d743d7ac5b08d8e7bd0a74763561b36f0.scope - libcontainer container 9920b6768c1db67b1ba964ea8b209e9d743d7ac5b08d8e7bd0a74763561b36f0. Nov 8 00:29:33.068629 containerd[1547]: time="2025-11-08T00:29:33.068205317Z" level=info msg="StartContainer for \"9920b6768c1db67b1ba964ea8b209e9d743d7ac5b08d8e7bd0a74763561b36f0\" returns successfully" Nov 8 00:29:33.627911 kubelet[2738]: I1108 00:29:33.624739 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dcc55bf47-b8pgb" podStartSLOduration=1.5992391769999998 podStartE2EDuration="3.624729524s" podCreationTimestamp="2025-11-08 00:29:30 +0000 UTC" firstStartedPulling="2025-11-08 00:29:30.840855176 +0000 UTC m=+18.457367089" lastFinishedPulling="2025-11-08 00:29:32.866345523 +0000 UTC m=+20.482857436" observedRunningTime="2025-11-08 00:29:33.621817277 +0000 UTC m=+21.238329201" watchObservedRunningTime="2025-11-08 00:29:33.624729524 +0000 UTC m=+21.241241450" Nov 8 00:29:33.636744 kubelet[2738]: E1108 00:29:33.636721 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:33.636744 kubelet[2738]: W1108 00:29:33.636738 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:33.642721 kubelet[2738]: E1108 00:29:33.642691 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:33.642937 kubelet[2738]: E1108 00:29:33.642922 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:33.642937 kubelet[2738]: W1108 00:29:33.642934 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:33.643305 kubelet[2738]: E1108 00:29:33.642946 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:33.643305 kubelet[2738]: E1108 00:29:33.643072 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:33.643305 kubelet[2738]: W1108 00:29:33.643077 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:33.643305 kubelet[2738]: E1108 00:29:33.643083 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[the same FlexVolume driver-call failure triple (driver-call.go:262 / driver-call.go:149 / plugins.go:703) repeats continuously from Nov 8 00:29:33.636721 through 00:29:33.688182; duplicates omitted]
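[annotation] For context on the flood above: the kubelet periodically probes the FlexVolume plugin directory, execs each driver with the "init" argument, and JSON-decodes whatever the driver prints on stdout. Here the Calico uds driver binary has not been installed yet, so the exec fails, stdout is empty, and decoding the empty string produces Go's "unexpected end of JSON input". A minimal sketch of that decode failure, assuming only the standard library; this is an illustration, not the kubelet's actual driver-call code, and the DriverStatus struct is a hypothetical stand-in:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative stand-in for the JSON reply a
// FlexVolume driver is expected to print on stdout (not the real type).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The uds binary is missing, so the "init" call yields no output at
	// all; the kubelet then attempts to decode the empty string as JSON.
	output := []byte("")

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```

That is why each burst pairs two errors: first the exec failure ("executable file not found in $PATH"), then the unmarshal failure on the resulting empty output.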
Nov 8 00:29:34.697492 kubelet[2738]: E1108 00:29:34.697197 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:34.724320 containerd[1547]: time="2025-11-08T00:29:34.724285285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:34.728813 containerd[1547]: time="2025-11-08T00:29:34.728776284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:29:34.737197 containerd[1547]: time="2025-11-08T00:29:34.736279052Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:34.748108 containerd[1547]: time="2025-11-08T00:29:34.747165126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:34.748108 containerd[1547]: time="2025-11-08T00:29:34.747838749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.881359514s" Nov 8 00:29:34.748108 containerd[1547]: time="2025-11-08T00:29:34.747860920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" [the FlexVolume error triple resumes, repeating from 00:29:34.808306 through 00:29:34.818663; duplicates omitted]
Nov 8 00:29:34.840317 containerd[1547]: time="2025-11-08T00:29:34.840239745Z" level=info msg="CreateContainer within sandbox \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" [the FlexVolume error triple repeats again from 00:29:34.856422 through 00:29:34.859951; duplicates omitted]
Nov 8 00:29:34.881319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362099254.mount: Deactivated successfully. Nov 8 00:29:34.894581 containerd[1547]: time="2025-11-08T00:29:34.894532850Z" level=info msg="CreateContainer within sandbox \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721\"" Nov 8 00:29:34.895819 containerd[1547]: time="2025-11-08T00:29:34.895035029Z" level=info msg="StartContainer for \"4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721\"" Nov 8 00:29:34.915738 systemd[1]: Started cri-containerd-4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721.scope - libcontainer container 4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721. Nov 8 00:29:34.942298 containerd[1547]: time="2025-11-08T00:29:34.942273162Z" level=info msg="StartContainer for \"4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721\" returns successfully" Nov 8 00:29:34.950061 systemd[1]: cri-containerd-4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721.scope: Deactivated successfully. Nov 8 00:29:34.966680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721-rootfs.mount: Deactivated successfully. 
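[annotation] The flexvol-driver init container that just ran to completion comes from the pod2daemon-flexvol image pulled above; in a standard Calico deployment it installs the uds FlexVolume binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, which would explain why the driver-call failure bursts do not recur after this point in the log.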
Nov 8 00:29:35.022762 containerd[1547]: time="2025-11-08T00:29:35.022712506Z" level=info msg="shim disconnected" id=4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721 namespace=k8s.io Nov 8 00:29:35.022762 containerd[1547]: time="2025-11-08T00:29:35.022755226Z" level=warning msg="cleaning up after shim disconnected" id=4a510626620b12686d0178c02e181a15f737e1a84ead4224b64489207d30d721 namespace=k8s.io Nov 8 00:29:35.022762 containerd[1547]: time="2025-11-08T00:29:35.022760753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:35.735068 containerd[1547]: time="2025-11-08T00:29:35.734800888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:29:36.488767 kubelet[2738]: E1108 00:29:36.488226 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:38.494226 kubelet[2738]: E1108 00:29:38.494199 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:38.497270 containerd[1547]: time="2025-11-08T00:29:38.496213388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:38.497270 containerd[1547]: time="2025-11-08T00:29:38.496479380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:29:38.497270 containerd[1547]: time="2025-11-08T00:29:38.496791645Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:38.498354 containerd[1547]: time="2025-11-08T00:29:38.498342254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:38.499195 containerd[1547]: time="2025-11-08T00:29:38.498978954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.764153991s" Nov 8 00:29:38.499195 containerd[1547]: time="2025-11-08T00:29:38.498998173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:29:38.502009 containerd[1547]: time="2025-11-08T00:29:38.501489322Z" level=info msg="CreateContainer within sandbox \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:29:38.509801 containerd[1547]: time="2025-11-08T00:29:38.509432102Z" level=info msg="CreateContainer within sandbox \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb\"" Nov 8 00:29:38.509947 containerd[1547]: time="2025-11-08T00:29:38.509935241Z" level=info msg="StartContainer for \"597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb\"" Nov 8 00:29:38.536693 systemd[1]: Started cri-containerd-597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb.scope - libcontainer container 597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb. Nov 8 00:29:38.561285 containerd[1547]: time="2025-11-08T00:29:38.561262263Z" level=info msg="StartContainer for \"597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb\" returns successfully" Nov 8 00:29:40.121216 systemd[1]: cri-containerd-597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb.scope: Deactivated successfully. Nov 8 00:29:40.146491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb-rootfs.mount: Deactivated successfully. Nov 8 00:29:40.243092 kubelet[2738]: I1108 00:29:40.200892 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:29:40.257829 containerd[1547]: time="2025-11-08T00:29:40.257773888Z" level=info msg="shim disconnected" id=597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb namespace=k8s.io Nov 8 00:29:40.257829 containerd[1547]: time="2025-11-08T00:29:40.257814767Z" level=warning msg="cleaning up after shim disconnected" id=597d83508b9567127754537cc8b67dc8da9e7265349367583b974d5458b68acb namespace=k8s.io Nov 8 00:29:40.257829 containerd[1547]: time="2025-11-08T00:29:40.257820659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:40.435777 kubelet[2738]: I1108 00:29:40.435747 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a9d7321-1148-43be-b5df-da7f193de30d-tigera-ca-bundle\") pod \"calico-kube-controllers-57d6675b9f-clrr6\" (UID: \"6a9d7321-1148-43be-b5df-da7f193de30d\") " pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" Nov 8 00:29:40.435875 kubelet[2738]: I1108 00:29:40.435781 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a6c7b38c-00b0-4b95-83b4-14d8b8afda37-calico-apiserver-certs\") pod \"calico-apiserver-7f5dbf8768-w74ds\" (UID: \"a6c7b38c-00b0-4b95-83b4-14d8b8afda37\") " pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" Nov 8 00:29:40.435875 kubelet[2738]: I1108 00:29:40.435794 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79m85\" (UniqueName: \"kubernetes.io/projected/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-kube-api-access-79m85\") pod \"whisker-6f4bb8f964-gbjvr\" (UID: \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\") " pod="calico-system/whisker-6f4bb8f964-gbjvr" Nov 8 00:29:40.435875 kubelet[2738]: I1108 00:29:40.435810 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srjmq\" (UniqueName: \"kubernetes.io/projected/6a9d7321-1148-43be-b5df-da7f193de30d-kube-api-access-srjmq\") pod \"calico-kube-controllers-57d6675b9f-clrr6\" (UID: \"6a9d7321-1148-43be-b5df-da7f193de30d\") " pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" Nov 8 00:29:40.435875 kubelet[2738]: I1108 
00:29:40.435828 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc1a7be6-78b9-4b63-807c-f29c0ef99466-calico-apiserver-certs\") pod \"calico-apiserver-7f5dbf8768-lwfmb\" (UID: \"dc1a7be6-78b9-4b63-807c-f29c0ef99466\") " pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" Nov 8 00:29:40.435875 kubelet[2738]: I1108 00:29:40.435840 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/007a5707-c952-467d-a723-faa6baf2e9bc-goldmane-ca-bundle\") pod \"goldmane-666569f655-tnwtm\" (UID: \"007a5707-c952-467d-a723-faa6baf2e9bc\") " pod="calico-system/goldmane-666569f655-tnwtm" Nov 8 00:29:40.436201 kubelet[2738]: I1108 00:29:40.435853 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6tp5\" (UniqueName: \"kubernetes.io/projected/007a5707-c952-467d-a723-faa6baf2e9bc-kube-api-access-f6tp5\") pod \"goldmane-666569f655-tnwtm\" (UID: \"007a5707-c952-467d-a723-faa6baf2e9bc\") " pod="calico-system/goldmane-666569f655-tnwtm" Nov 8 00:29:40.436201 kubelet[2738]: I1108 00:29:40.435865 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65sn4\" (UniqueName: \"kubernetes.io/projected/1885839c-21b3-4320-a460-ea9b5405da38-kube-api-access-65sn4\") pod \"coredns-674b8bbfcf-v5dvc\" (UID: \"1885839c-21b3-4320-a460-ea9b5405da38\") " pod="kube-system/coredns-674b8bbfcf-v5dvc" Nov 8 00:29:40.436201 kubelet[2738]: I1108 00:29:40.435879 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wmf\" (UniqueName: \"kubernetes.io/projected/a6c7b38c-00b0-4b95-83b4-14d8b8afda37-kube-api-access-s9wmf\") pod \"calico-apiserver-7f5dbf8768-w74ds\" (UID: \"a6c7b38c-00b0-4b95-83b4-14d8b8afda37\") " pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" Nov 8 00:29:40.436201 kubelet[2738]: I1108 00:29:40.435892 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007a5707-c952-467d-a723-faa6baf2e9bc-config\") pod \"goldmane-666569f655-tnwtm\" (UID: \"007a5707-c952-467d-a723-faa6baf2e9bc\") " pod="calico-system/goldmane-666569f655-tnwtm" Nov 8 00:29:40.436201 kubelet[2738]: I1108 00:29:40.435902 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d48d5302-73ac-4c35-86c4-ee48c074bbf4-config-volume\") pod \"coredns-674b8bbfcf-4xpww\" (UID: \"d48d5302-73ac-4c35-86c4-ee48c074bbf4\") " pod="kube-system/coredns-674b8bbfcf-4xpww" Nov 8 00:29:40.436301 kubelet[2738]: I1108 00:29:40.435919 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6658x\" (UniqueName: \"kubernetes.io/projected/dc1a7be6-78b9-4b63-807c-f29c0ef99466-kube-api-access-6658x\") pod \"calico-apiserver-7f5dbf8768-lwfmb\" (UID: \"dc1a7be6-78b9-4b63-807c-f29c0ef99466\") " pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" Nov 8 00:29:40.436301 kubelet[2738]: I1108 00:29:40.435930 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/007a5707-c952-467d-a723-faa6baf2e9bc-goldmane-key-pair\") pod \"goldmane-666569f655-tnwtm\" (UID: \"007a5707-c952-467d-a723-faa6baf2e9bc\") " pod="calico-system/goldmane-666569f655-tnwtm" Nov 8 00:29:40.436301 kubelet[2738]: I1108 00:29:40.435940 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1885839c-21b3-4320-a460-ea9b5405da38-config-volume\") pod \"coredns-674b8bbfcf-v5dvc\" (UID: \"1885839c-21b3-4320-a460-ea9b5405da38\") " pod="kube-system/coredns-674b8bbfcf-v5dvc" Nov 8 00:29:40.436301 kubelet[2738]: I1108 00:29:40.435951 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb9kd\" (UniqueName: \"kubernetes.io/projected/d48d5302-73ac-4c35-86c4-ee48c074bbf4-kube-api-access-lb9kd\") pod \"coredns-674b8bbfcf-4xpww\" (UID: \"d48d5302-73ac-4c35-86c4-ee48c074bbf4\") " pod="kube-system/coredns-674b8bbfcf-4xpww" Nov 8 00:29:40.436301 kubelet[2738]: I1108 00:29:40.435962 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-backend-key-pair\") pod \"whisker-6f4bb8f964-gbjvr\" (UID: \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\") " pod="calico-system/whisker-6f4bb8f964-gbjvr" Nov 8 00:29:40.436395 kubelet[2738]: I1108 00:29:40.435973 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-ca-bundle\") pod \"whisker-6f4bb8f964-gbjvr\" (UID: \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\") " pod="calico-system/whisker-6f4bb8f964-gbjvr" Nov 8 00:29:40.447556 systemd[1]: Created slice kubepods-burstable-pod1885839c_21b3_4320_a460_ea9b5405da38.slice - libcontainer container kubepods-burstable-pod1885839c_21b3_4320_a460_ea9b5405da38.slice. Nov 8 00:29:40.455461 systemd[1]: Created slice kubepods-besteffort-poddc1a7be6_78b9_4b63_807c_f29c0ef99466.slice - libcontainer container kubepods-besteffort-poddc1a7be6_78b9_4b63_807c_f29c0ef99466.slice. Nov 8 00:29:40.462237 systemd[1]: Created slice kubepods-besteffort-pod6a9d7321_1148_43be_b5df_da7f193de30d.slice - libcontainer container kubepods-besteffort-pod6a9d7321_1148_43be_b5df_da7f193de30d.slice. Nov 8 00:29:40.468332 systemd[1]: Created slice kubepods-besteffort-poda6c7b38c_00b0_4b95_83b4_14d8b8afda37.slice - libcontainer container kubepods-besteffort-poda6c7b38c_00b0_4b95_83b4_14d8b8afda37.slice. Nov 8 00:29:40.473127 systemd[1]: Created slice kubepods-besteffort-pod007a5707_c952_467d_a723_faa6baf2e9bc.slice - libcontainer container kubepods-besteffort-pod007a5707_c952_467d_a723_faa6baf2e9bc.slice. Nov 8 00:29:40.479382 systemd[1]: Created slice kubepods-burstable-podd48d5302_73ac_4c35_86c4_ee48c074bbf4.slice - libcontainer container kubepods-burstable-podd48d5302_73ac_4c35_86c4_ee48c074bbf4.slice. Nov 8 00:29:40.485978 systemd[1]: Created slice kubepods-besteffort-podc5cc6a0f_3bc6_4948_a19c_70ab4e2da335.slice - libcontainer container kubepods-besteffort-podc5cc6a0f_3bc6_4948_a19c_70ab4e2da335.slice. Nov 8 00:29:40.496953 systemd[1]: Created slice kubepods-besteffort-poda1ec52db_bd41_4d19_b1f6_a1fab4a28f01.slice - libcontainer container kubepods-besteffort-poda1ec52db_bd41_4d19_b1f6_a1fab4a28f01.slice. 
Nov 8 00:29:40.512555 containerd[1547]: time="2025-11-08T00:29:40.512527530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4kl5,Uid:a1ec52db-bd41-4d19-b1f6-a1fab4a28f01,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:40.753385 containerd[1547]: time="2025-11-08T00:29:40.752654637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5dvc,Uid:1885839c-21b3-4320-a460-ea9b5405da38,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:40.758995 containerd[1547]: time="2025-11-08T00:29:40.758949565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-lwfmb,Uid:dc1a7be6-78b9-4b63-807c-f29c0ef99466,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:29:40.765882 containerd[1547]: time="2025-11-08T00:29:40.765765986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d6675b9f-clrr6,Uid:6a9d7321-1148-43be-b5df-da7f193de30d,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:40.771615 containerd[1547]: time="2025-11-08T00:29:40.770770050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-w74ds,Uid:a6c7b38c-00b0-4b95-83b4-14d8b8afda37,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:29:40.777220 containerd[1547]: time="2025-11-08T00:29:40.777042259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnwtm,Uid:007a5707-c952-467d-a723-faa6baf2e9bc,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:40.783295 containerd[1547]: time="2025-11-08T00:29:40.783250548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpww,Uid:d48d5302-73ac-4c35-86c4-ee48c074bbf4,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:40.783572 containerd[1547]: time="2025-11-08T00:29:40.783560697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:29:40.789779 containerd[1547]: time="2025-11-08T00:29:40.789595477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f4bb8f964-gbjvr,Uid:c5cc6a0f-3bc6-4948-a19c-70ab4e2da335,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:40.845017 containerd[1547]: time="2025-11-08T00:29:40.844990569Z" level=error msg="Failed to destroy network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.849512 containerd[1547]: time="2025-11-08T00:29:40.849484691Z" level=error msg="encountered an error cleaning up failed sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.849682 containerd[1547]: time="2025-11-08T00:29:40.849660246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4kl5,Uid:a1ec52db-bd41-4d19-b1f6-a1fab4a28f01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.883997 kubelet[2738]: E1108 00:29:40.883965 2738 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.884785 kubelet[2738]: E1108 00:29:40.884539 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4kl5" Nov 8 00:29:40.888542 kubelet[2738]: E1108 00:29:40.888516 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4kl5" Nov 8 00:29:40.888661 kubelet[2738]: E1108 00:29:40.888580 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:40.898543 containerd[1547]: time="2025-11-08T00:29:40.898508405Z" level=error msg="Failed to destroy network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.901046 containerd[1547]: time="2025-11-08T00:29:40.901025245Z" level=error msg="encountered an error cleaning up failed sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.901949 containerd[1547]: time="2025-11-08T00:29:40.901932264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5dvc,Uid:1885839c-21b3-4320-a460-ea9b5405da38,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.905405 kubelet[2738]: E1108 
00:29:40.904625 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.905405 kubelet[2738]: E1108 00:29:40.904670 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v5dvc" Nov 8 00:29:40.905405 kubelet[2738]: E1108 00:29:40.904685 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v5dvc" Nov 8 00:29:40.905522 kubelet[2738]: E1108 00:29:40.904727 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v5dvc_kube-system(1885839c-21b3-4320-a460-ea9b5405da38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v5dvc_kube-system(1885839c-21b3-4320-a460-ea9b5405da38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v5dvc" podUID="1885839c-21b3-4320-a460-ea9b5405da38" Nov 8 00:29:40.939880 containerd[1547]: time="2025-11-08T00:29:40.939852799Z" level=error msg="Failed to destroy network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.940300 containerd[1547]: time="2025-11-08T00:29:40.940282644Z" level=error msg="encountered an error cleaning up failed sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.940567 containerd[1547]: time="2025-11-08T00:29:40.940551040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-w74ds,Uid:a6c7b38c-00b0-4b95-83b4-14d8b8afda37,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:29:40.941049 kubelet[2738]: E1108 00:29:40.941026 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.941095 kubelet[2738]: E1108 00:29:40.941063 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" Nov 8 00:29:40.941095 kubelet[2738]: E1108 00:29:40.941078 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" Nov 8 00:29:40.941157 kubelet[2738]: E1108 00:29:40.941107 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5dbf8768-w74ds_calico-apiserver(a6c7b38c-00b0-4b95-83b4-14d8b8afda37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5dbf8768-w74ds_calico-apiserver(a6c7b38c-00b0-4b95-83b4-14d8b8afda37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:29:40.952282 containerd[1547]: time="2025-11-08T00:29:40.952250806Z" level=error msg="Failed to destroy network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.952484 containerd[1547]: time="2025-11-08T00:29:40.952474007Z" level=error msg="encountered an error cleaning up failed sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.952523 containerd[1547]: time="2025-11-08T00:29:40.952505628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpww,Uid:d48d5302-73ac-4c35-86c4-ee48c074bbf4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.952683 kubelet[2738]: E1108 00:29:40.952657 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.952722 kubelet[2738]: E1108 00:29:40.952696 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4xpww" Nov 8 00:29:40.952722 kubelet[2738]: E1108 00:29:40.952710 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4xpww" Nov 8 00:29:40.952764 kubelet[2738]: E1108 00:29:40.952740 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4xpww_kube-system(d48d5302-73ac-4c35-86c4-ee48c074bbf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4xpww_kube-system(d48d5302-73ac-4c35-86c4-ee48c074bbf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4xpww" podUID="d48d5302-73ac-4c35-86c4-ee48c074bbf4" Nov 8 00:29:40.956673 containerd[1547]: time="2025-11-08T00:29:40.956642390Z" level=error msg="Failed to destroy network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.956865 containerd[1547]: time="2025-11-08T00:29:40.956848709Z" level=error msg="encountered an error cleaning up failed sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.956897 containerd[1547]: time="2025-11-08T00:29:40.956881825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-lwfmb,Uid:dc1a7be6-78b9-4b63-807c-f29c0ef99466,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.957009 kubelet[2738]: E1108 00:29:40.956986 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.957048 kubelet[2738]: E1108 00:29:40.957021 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" Nov 8 00:29:40.957048 kubelet[2738]: E1108 00:29:40.957033 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" Nov 8 00:29:40.957357 kubelet[2738]: E1108 00:29:40.957062 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5dbf8768-lwfmb_calico-apiserver(dc1a7be6-78b9-4b63-807c-f29c0ef99466)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5dbf8768-lwfmb_calico-apiserver(dc1a7be6-78b9-4b63-807c-f29c0ef99466)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:29:40.967182 containerd[1547]: time="2025-11-08T00:29:40.967149980Z" level=error msg="Failed to destroy network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.967381 containerd[1547]: time="2025-11-08T00:29:40.967363326Z" level=error msg="encountered an error cleaning up failed sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.967415 containerd[1547]: time="2025-11-08T00:29:40.967402597Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57d6675b9f-clrr6,Uid:6a9d7321-1148-43be-b5df-da7f193de30d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.967550 kubelet[2738]: E1108 00:29:40.967525 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.967581 kubelet[2738]: E1108 00:29:40.967563 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" Nov 8 00:29:40.967615 kubelet[2738]: E1108 00:29:40.967578 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" Nov 8 00:29:40.968174 kubelet[2738]: E1108 00:29:40.967620 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57d6675b9f-clrr6_calico-system(6a9d7321-1148-43be-b5df-da7f193de30d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57d6675b9f-clrr6_calico-system(6a9d7321-1148-43be-b5df-da7f193de30d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:29:40.971340 containerd[1547]: time="2025-11-08T00:29:40.971316137Z" level=error msg="Failed to destroy network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.971559 containerd[1547]: time="2025-11-08T00:29:40.971542086Z" level=error msg="encountered an error cleaning up failed sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Nov 8 00:29:40.971590 containerd[1547]: time="2025-11-08T00:29:40.971580673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnwtm,Uid:007a5707-c952-467d-a723-faa6baf2e9bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.971801 kubelet[2738]: E1108 00:29:40.971774 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.971840 kubelet[2738]: E1108 00:29:40.971814 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tnwtm" Nov 8 00:29:40.971840 kubelet[2738]: E1108 00:29:40.971828 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tnwtm" Nov 8 00:29:40.971882 kubelet[2738]: E1108 00:29:40.971854 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-tnwtm_calico-system(007a5707-c952-467d-a723-faa6baf2e9bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-tnwtm_calico-system(007a5707-c952-467d-a723-faa6baf2e9bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:29:40.977812 containerd[1547]: time="2025-11-08T00:29:40.977786543Z" level=error msg="Failed to destroy network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.978009 containerd[1547]: time="2025-11-08T00:29:40.977991252Z" level=error msg="encountered an error cleaning up failed sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.978045 containerd[1547]: time="2025-11-08T00:29:40.978032583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f4bb8f964-gbjvr,Uid:c5cc6a0f-3bc6-4948-a19c-70ab4e2da335,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.978171 kubelet[2738]: E1108 00:29:40.978146 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:40.978211 kubelet[2738]: E1108 00:29:40.978200 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f4bb8f964-gbjvr" Nov 8 00:29:40.978232 kubelet[2738]: E1108 00:29:40.978213 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f4bb8f964-gbjvr" Nov 8 00:29:40.978252 kubelet[2738]: E1108 00:29:40.978242 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f4bb8f964-gbjvr_calico-system(c5cc6a0f-3bc6-4948-a19c-70ab4e2da335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f4bb8f964-gbjvr_calico-system(c5cc6a0f-3bc6-4948-a19c-70ab4e2da335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f4bb8f964-gbjvr" podUID="c5cc6a0f-3bc6-4948-a19c-70ab4e2da335" Nov 8 00:29:41.154654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702-shm.mount: Deactivated successfully. 
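Every sandbox failure in the run above has the same root cause: the Calico CNI binary cannot stat /var/lib/calico/nodename, a file the calico/node agent writes once it has started and registered the node, so both CNI ADD and DEL fail until that agent is up. A minimal sketch of that readiness gate, assuming only the behavior implied by the error text (this is not the plugin's actual source):

```go
// Minimal sketch of the check the Calico CNI plugin appears to perform
// before any ADD/DEL, based on the error strings logged above.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Mirrors the hint in the log: the file only exists once the
		// calico/node container is running with /var/lib/calico mounted.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	if _, err := nodename(); err != nil {
		fmt.Println("CNI not ready:", err)
	}
}
```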
Nov 8 00:29:41.792009 kubelet[2738]: I1108 00:29:41.791882 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:29:41.793412 kubelet[2738]: I1108 00:29:41.793283 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:29:41.812106 kubelet[2738]: I1108 00:29:41.812087 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:29:41.814474 kubelet[2738]: I1108 00:29:41.814124 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:29:41.816443 kubelet[2738]: I1108 00:29:41.816429 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:29:41.818563 kubelet[2738]: I1108 00:29:41.818253 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:29:41.819868 kubelet[2738]: I1108 00:29:41.819858 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:29:41.821595 kubelet[2738]: I1108 00:29:41.821585 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:29:41.837250 containerd[1547]: time="2025-11-08T00:29:41.837099761Z" level=info msg="StopPodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\"" Nov 8 00:29:41.838305 containerd[1547]: time="2025-11-08T00:29:41.838109095Z" level=info msg="Ensure that sandbox 264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702 in task-service has been cleanup successfully" Nov 8 00:29:41.838834 containerd[1547]: time="2025-11-08T00:29:41.838647583Z" level=info msg="StopPodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\"" Nov 8 00:29:41.838834 containerd[1547]: time="2025-11-08T00:29:41.838739311Z" level=info msg="Ensure that sandbox 7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc in task-service has been cleanup successfully" Nov 8 00:29:41.839138 containerd[1547]: time="2025-11-08T00:29:41.839119031Z" level=info msg="StopPodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\"" Nov 8 00:29:41.839214 containerd[1547]: time="2025-11-08T00:29:41.839202557Z" level=info msg="Ensure that sandbox 20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053 in task-service has been cleanup successfully" Nov 8 00:29:41.839894 containerd[1547]: time="2025-11-08T00:29:41.839882832Z" level=info msg="StopPodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\"" Nov 8 00:29:41.840080 containerd[1547]: time="2025-11-08T00:29:41.840069419Z" level=info msg="StopPodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\"" Nov 8 00:29:41.841632 containerd[1547]: time="2025-11-08T00:29:41.841598127Z" level=info msg="Ensure that sandbox 80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590 in task-service has been cleanup successfully" Nov 8 00:29:41.843872 
containerd[1547]: time="2025-11-08T00:29:41.843829173Z" level=info msg="StopPodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\"" Nov 8 00:29:41.843988 containerd[1547]: time="2025-11-08T00:29:41.843943121Z" level=info msg="Ensure that sandbox 5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da in task-service has been cleanup successfully" Nov 8 00:29:41.844181 containerd[1547]: time="2025-11-08T00:29:41.844118411Z" level=info msg="Ensure that sandbox 16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1 in task-service has been cleanup successfully" Nov 8 00:29:41.844556 containerd[1547]: time="2025-11-08T00:29:41.844545060Z" level=info msg="StopPodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\"" Nov 8 00:29:41.844678 containerd[1547]: time="2025-11-08T00:29:41.844669442Z" level=info msg="StopPodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\"" Nov 8 00:29:41.845954 containerd[1547]: time="2025-11-08T00:29:41.845284780Z" level=info msg="Ensure that sandbox dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044 in task-service has been cleanup successfully" Nov 8 00:29:41.846130 containerd[1547]: time="2025-11-08T00:29:41.845336384Z" level=info msg="Ensure that sandbox 889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91 in task-service has been cleanup successfully" Nov 8 00:29:41.898011 containerd[1547]: time="2025-11-08T00:29:41.897978381Z" level=error msg="StopPodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" failed" error="failed to destroy network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.899211 kubelet[2738]: E1108 00:29:41.898139 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:29:41.900341 kubelet[2738]: E1108 00:29:41.899179 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053"} Nov 8 00:29:41.900341 kubelet[2738]: E1108 00:29:41.900261 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d48d5302-73ac-4c35-86c4-ee48c074bbf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.900341 kubelet[2738]: E1108 00:29:41.900276 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d48d5302-73ac-4c35-86c4-ee48c074bbf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4xpww" podUID="d48d5302-73ac-4c35-86c4-ee48c074bbf4" Nov 8 00:29:41.911484 containerd[1547]: time="2025-11-08T00:29:41.911349367Z" level=error msg="StopPodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" failed" error="failed to destroy network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.911781 kubelet[2738]: E1108 00:29:41.911482 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:29:41.911781 kubelet[2738]: E1108 00:29:41.911517 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702"} Nov 8 00:29:41.911781 kubelet[2738]: E1108 00:29:41.911540 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.911781 kubelet[2738]: E1108 00:29:41.911560 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:41.912959 containerd[1547]: time="2025-11-08T00:29:41.912942376Z" level=error msg="StopPodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" failed" error="failed to destroy network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.913105 kubelet[2738]: E1108 00:29:41.913088 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:29:41.913166 kubelet[2738]: E1108 00:29:41.913158 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1"} Nov 8 00:29:41.913240 kubelet[2738]: E1108 00:29:41.913213 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc1a7be6-78b9-4b63-807c-f29c0ef99466\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.913345 kubelet[2738]: E1108 00:29:41.913325 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc1a7be6-78b9-4b63-807c-f29c0ef99466\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:29:41.913478 containerd[1547]: time="2025-11-08T00:29:41.913448564Z" level=error msg="StopPodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" failed" error="failed to destroy network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.913625 kubelet[2738]: E1108 00:29:41.913609 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:29:41.913658 kubelet[2738]: E1108 00:29:41.913628 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044"} Nov 8 00:29:41.913658 kubelet[2738]: E1108 00:29:41.913641 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a9d7321-1148-43be-b5df-da7f193de30d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.913814 kubelet[2738]: E1108 00:29:41.913656 
2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a9d7321-1148-43be-b5df-da7f193de30d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:29:41.922960 containerd[1547]: time="2025-11-08T00:29:41.922931692Z" level=error msg="StopPodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" failed" error="failed to destroy network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.923634 containerd[1547]: time="2025-11-08T00:29:41.923217315Z" level=error msg="StopPodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" failed" error="failed to destroy network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.923692 kubelet[2738]: E1108 00:29:41.923537 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:29:41.923692 kubelet[2738]: E1108 00:29:41.923569 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc"} Nov 8 00:29:41.923692 kubelet[2738]: E1108 00:29:41.923589 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"007a5707-c952-467d-a723-faa6baf2e9bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.923692 kubelet[2738]: E1108 00:29:41.923537 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:29:41.923692 kubelet[2738]: E1108 00:29:41.923625 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590"} Nov 8 00:29:41.923836 kubelet[2738]: E1108 00:29:41.923640 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.923836 kubelet[2738]: E1108 00:29:41.923654 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f4bb8f964-gbjvr" podUID="c5cc6a0f-3bc6-4948-a19c-70ab4e2da335" Nov 8 00:29:41.924220 kubelet[2738]: E1108 00:29:41.923991 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"007a5707-c952-467d-a723-faa6baf2e9bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:29:41.928776 containerd[1547]: time="2025-11-08T00:29:41.928754409Z" level=error msg="StopPodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" failed" error="failed to destroy network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.929336 containerd[1547]: time="2025-11-08T00:29:41.929203853Z" level=error msg="StopPodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" failed" error="failed to destroy network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:41.929467 kubelet[2738]: E1108 00:29:41.929434 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:29:41.929498 kubelet[2738]: E1108 00:29:41.929464 2738 kuberuntime_manager.go:1586] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da"} Nov 8 00:29:41.929498 kubelet[2738]: E1108 00:29:41.929487 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6c7b38c-00b0-4b95-83b4-14d8b8afda37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.929574 kubelet[2738]: E1108 00:29:41.929501 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6c7b38c-00b0-4b95-83b4-14d8b8afda37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:29:41.944849 kubelet[2738]: E1108 00:29:41.928894 2738 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:29:41.944849 kubelet[2738]: E1108 00:29:41.944281 2738 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91"} Nov 8 00:29:41.944849 kubelet[2738]: E1108 00:29:41.944304 2738 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1885839c-21b3-4320-a460-ea9b5405da38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:41.944849 kubelet[2738]: E1108 00:29:41.944317 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1885839c-21b3-4320-a460-ea9b5405da38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v5dvc" podUID="1885839c-21b3-4320-a460-ea9b5405da38" Nov 8 00:29:45.645401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3554712994.mount: Deactivated successfully. 
Nov 8 00:29:45.872963 containerd[1547]: time="2025-11-08T00:29:45.859613328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:45.885882 containerd[1547]: time="2025-11-08T00:29:45.885705183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:29:45.900632 containerd[1547]: time="2025-11-08T00:29:45.900519366Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:45.907438 containerd[1547]: time="2025-11-08T00:29:45.907381909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:45.909420 containerd[1547]: time="2025-11-08T00:29:45.909391942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.12400723s" Nov 8 00:29:45.909420 containerd[1547]: time="2025-11-08T00:29:45.909418491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:29:46.037386 containerd[1547]: time="2025-11-08T00:29:46.037276193Z" level=info msg="CreateContainer within sandbox \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:29:46.106245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054928518.mount: Deactivated successfully. Nov 8 00:29:46.111186 containerd[1547]: time="2025-11-08T00:29:46.111161844Z" level=info msg="CreateContainer within sandbox \"18b882265e4fec5ed8da48d850feaeb246d2c0470207606499eb1b30d741b5ea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f\"" Nov 8 00:29:46.112842 containerd[1547]: time="2025-11-08T00:29:46.112810843Z" level=info msg="StartContainer for \"96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f\"" Nov 8 00:29:46.184729 systemd[1]: Started cri-containerd-96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f.scope - libcontainer container 96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f. Nov 8 00:29:46.219626 containerd[1547]: time="2025-11-08T00:29:46.218895177Z" level=info msg="StartContainer for \"96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f\" returns successfully" Nov 8 00:29:46.322972 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:29:46.328541 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:29:46.982371 systemd[1]: run-containerd-runc-k8s.io-96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f-runc.XV6tSH.mount: Deactivated successfully.
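From the pull record just above (size "156883537" bytes in 5.12400723s), the effective download-and-unpack rate comes out near 29 MiB/s; a quick Go check of the arithmetic using the exact values containerd logged:

```go
// Pull-rate arithmetic from the containerd entries above; both constants
// are copied verbatim from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const sizeBytes = 156883537                   // size "156883537"
	dur, err := time.ParseDuration("5.12400723s") // "in 5.12400723s"
	if err != nil {
		panic(err)
	}
	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %s = %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
	// Prints roughly: 149.6 MiB in 5.12400723s = 29.2 MiB/s
}
```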
Nov 8 00:29:47.676147 kubelet[2738]: I1108 00:29:47.673034 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tppxc" podStartSLOduration=2.721639098 podStartE2EDuration="17.667340735s" podCreationTimestamp="2025-11-08 00:29:30 +0000 UTC" firstStartedPulling="2025-11-08 00:29:30.964511094 +0000 UTC m=+18.581023010" lastFinishedPulling="2025-11-08 00:29:45.910212725 +0000 UTC m=+33.526724647" observedRunningTime="2025-11-08 00:29:46.947439309 +0000 UTC m=+34.563951234" watchObservedRunningTime="2025-11-08 00:29:47.667340735 +0000 UTC m=+35.283852668" Nov 8 00:29:47.686781 containerd[1547]: time="2025-11-08T00:29:47.686655980Z" level=info msg="StopPodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\"" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:47.772 [INFO][4080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:47.774 [INFO][4080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" iface="eth0" netns="/var/run/netns/cni-27ec54d0-38aa-54bb-4312-291bf610105b" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:47.775 [INFO][4080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" iface="eth0" netns="/var/run/netns/cni-27ec54d0-38aa-54bb-4312-291bf610105b" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:47.776 [INFO][4080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" iface="eth0" netns="/var/run/netns/cni-27ec54d0-38aa-54bb-4312-291bf610105b" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:47.776 [INFO][4080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:47.776 [INFO][4080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.316 [INFO][4087] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.318 [INFO][4087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.318 [INFO][4087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.329 [WARNING][4087] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.329 [INFO][4087] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.331 [INFO][4087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:48.336630 containerd[1547]: 2025-11-08 00:29:48.332 [INFO][4080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:29:48.336479 systemd[1]: run-netns-cni\x2d27ec54d0\x2d38aa\x2d54bb\x2d4312\x2d291bf610105b.mount: Deactivated successfully. Nov 8 00:29:48.350987 containerd[1547]: time="2025-11-08T00:29:48.342874260Z" level=info msg="TearDown network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" successfully" Nov 8 00:29:48.350987 containerd[1547]: time="2025-11-08T00:29:48.342901165Z" level=info msg="StopPodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" returns successfully" Nov 8 00:29:48.431980 kubelet[2738]: I1108 00:29:48.431954 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-ca-bundle\") pod \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\" (UID: \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\") " Nov 8 00:29:48.432146 kubelet[2738]: I1108 00:29:48.432133 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79m85\" (UniqueName: \"kubernetes.io/projected/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-kube-api-access-79m85\") pod \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\" (UID: \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\") " Nov 8 00:29:48.432210 kubelet[2738]: I1108 00:29:48.432202 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-backend-key-pair\") pod \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\" (UID: \"c5cc6a0f-3bc6-4948-a19c-70ab4e2da335\") " Nov 8 00:29:48.442578 kubelet[2738]: I1108 00:29:48.440167 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c5cc6a0f-3bc6-4948-a19c-70ab4e2da335" (UID: "c5cc6a0f-3bc6-4948-a19c-70ab4e2da335"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:29:48.451456 kubelet[2738]: I1108 00:29:48.451408 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-kube-api-access-79m85" (OuterVolumeSpecName: "kube-api-access-79m85") pod "c5cc6a0f-3bc6-4948-a19c-70ab4e2da335" (UID: "c5cc6a0f-3bc6-4948-a19c-70ab4e2da335"). InnerVolumeSpecName "kube-api-access-79m85". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:29:48.452629 kubelet[2738]: I1108 00:29:48.451644 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c5cc6a0f-3bc6-4948-a19c-70ab4e2da335" (UID: "c5cc6a0f-3bc6-4948-a19c-70ab4e2da335"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:29:48.453220 systemd[1]: var-lib-kubelet-pods-c5cc6a0f\x2d3bc6\x2d4948\x2da19c\x2d70ab4e2da335-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79m85.mount: Deactivated successfully. Nov 8 00:29:48.453375 systemd[1]: var-lib-kubelet-pods-c5cc6a0f\x2d3bc6\x2d4948\x2da19c\x2d70ab4e2da335-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:29:48.532851 kubelet[2738]: I1108 00:29:48.532821 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:29:48.532851 kubelet[2738]: I1108 00:29:48.532843 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79m85\" (UniqueName: \"kubernetes.io/projected/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-kube-api-access-79m85\") on node \"localhost\" DevicePath \"\"" Nov 8 00:29:48.532851 kubelet[2738]: I1108 00:29:48.532850 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:29:48.536117 systemd[1]: Removed slice kubepods-besteffort-podc5cc6a0f_3bc6_4948_a19c_70ab4e2da335.slice - libcontainer container kubepods-besteffort-podc5cc6a0f_3bc6_4948_a19c_70ab4e2da335.slice. Nov 8 00:29:49.000635 systemd[1]: Created slice kubepods-besteffort-pod926ce8dd_4771_4d76_a928_b17ff008cf2e.slice - libcontainer container kubepods-besteffort-pod926ce8dd_4771_4d76_a928_b17ff008cf2e.slice. 
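The escaped .mount unit names in these systemd entries (\x2d for '-', \x7e for '~') are the standard systemd encoding of the kubelet volume paths. A sketch of that escaping rule in Go, mimicking systemd-escape --path rather than porting systemd's implementation byte for byte:

```go
// An illustrative re-implementation of systemd's path escaping (compare
// systemd-escape --path); it explains the \x2d and \x7e runs in the
// .mount unit names above, but it is not systemd's actual code.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == ':', c == '_', c == '.' && i > 0:
			b.WriteByte(c) // safe characters pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// The projected service-account token volume from the log entries above.
	p := "/var/lib/kubelet/pods/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335/volumes/kubernetes.io~projected/kube-api-access-79m85"
	fmt.Println(escapePath(p) + ".mount")
}
```

Applied to that path, it reproduces the kube-api-access-79m85 unit name deactivated in the entries above exactly.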
Nov 8 00:29:49.006620 kernel: bpftool[4242]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:29:49.037547 kubelet[2738]: I1108 00:29:49.037014 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/926ce8dd-4771-4d76-a928-b17ff008cf2e-whisker-backend-key-pair\") pod \"whisker-84848b66c4-gnwcd\" (UID: \"926ce8dd-4771-4d76-a928-b17ff008cf2e\") " pod="calico-system/whisker-84848b66c4-gnwcd" Nov 8 00:29:49.037547 kubelet[2738]: I1108 00:29:49.037475 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmq4\" (UniqueName: \"kubernetes.io/projected/926ce8dd-4771-4d76-a928-b17ff008cf2e-kube-api-access-vlmq4\") pod \"whisker-84848b66c4-gnwcd\" (UID: \"926ce8dd-4771-4d76-a928-b17ff008cf2e\") " pod="calico-system/whisker-84848b66c4-gnwcd" Nov 8 00:29:49.037547 kubelet[2738]: I1108 00:29:49.037499 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/926ce8dd-4771-4d76-a928-b17ff008cf2e-whisker-ca-bundle\") pod \"whisker-84848b66c4-gnwcd\" (UID: \"926ce8dd-4771-4d76-a928-b17ff008cf2e\") " pod="calico-system/whisker-84848b66c4-gnwcd" Nov 8 00:29:49.194484 systemd-networkd[1441]: vxlan.calico: Link UP Nov 8 00:29:49.194491 systemd-networkd[1441]: vxlan.calico: Gained carrier Nov 8 00:29:49.349268 containerd[1547]: time="2025-11-08T00:29:49.348958636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84848b66c4-gnwcd,Uid:926ce8dd-4771-4d76-a928-b17ff008cf2e,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:49.476993 systemd-networkd[1441]: calieae1fb74fb8: Link UP Nov 8 00:29:49.477489 systemd-networkd[1441]: calieae1fb74fb8: Gained carrier Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.401 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84848b66c4--gnwcd-eth0 whisker-84848b66c4- calico-system 926ce8dd-4771-4d76-a928-b17ff008cf2e 933 0 2025-11-08 00:29:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84848b66c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84848b66c4-gnwcd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieae1fb74fb8 [] [] }} ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.401 [INFO][4290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.435 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" HandleID="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Workload="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.436 [INFO][4312] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" HandleID="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Workload="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84848b66c4-gnwcd", "timestamp":"2025-11-08 00:29:49.435593929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.436 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.436 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.436 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.443 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.451 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.453 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.454 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.455 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.455 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.456 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.458 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.462 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.462 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" host="localhost" Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.462 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
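The IPAM trace above confirms affinity for block 192.168.88.128/26 and claims 192.168.88.129 for the whisker pod. A small net/netip check of where that address sits in the block:

```go
// Where the claimed address sits inside the affinity block from the log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // block with host affinity
	assigned := netip.MustParseAddr("192.168.88.129")   // first IP handed out above

	fmt.Println("block contains assigned IP:", block.Contains(assigned)) // true
	// .128 is the block base; the first address handed out in this log is .129.
	fmt.Println("addresses in a /26:", 1<<(32-block.Bits())) // 64
}
```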
Nov 8 00:29:49.495301 containerd[1547]: 2025-11-08 00:29:49.462 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" HandleID="k8s-pod-network.8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Workload="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.498992 containerd[1547]: 2025-11-08 00:29:49.465 [INFO][4290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84848b66c4--gnwcd-eth0", GenerateName:"whisker-84848b66c4-", Namespace:"calico-system", SelfLink:"", UID:"926ce8dd-4771-4d76-a928-b17ff008cf2e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84848b66c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84848b66c4-gnwcd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieae1fb74fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:49.498992 containerd[1547]: 2025-11-08 00:29:49.468 [INFO][4290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.498992 containerd[1547]: 2025-11-08 00:29:49.470 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieae1fb74fb8 ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.498992 containerd[1547]: 2025-11-08 00:29:49.478 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.498992 containerd[1547]: 2025-11-08 00:29:49.479 [INFO][4290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84848b66c4--gnwcd-eth0", GenerateName:"whisker-84848b66c4-", Namespace:"calico-system", SelfLink:"", UID:"926ce8dd-4771-4d76-a928-b17ff008cf2e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84848b66c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc", Pod:"whisker-84848b66c4-gnwcd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieae1fb74fb8", MAC:"12:c6:6b:e9:44:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:49.498992 containerd[1547]: 2025-11-08 00:29:49.491 [INFO][4290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc" Namespace="calico-system" Pod="whisker-84848b66c4-gnwcd" WorkloadEndpoint="localhost-k8s-whisker--84848b66c4--gnwcd-eth0" Nov 8 00:29:49.513288 containerd[1547]: time="2025-11-08T00:29:49.512653308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:49.513288 containerd[1547]: time="2025-11-08T00:29:49.512697876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:49.513288 containerd[1547]: time="2025-11-08T00:29:49.512705994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:49.513288 containerd[1547]: time="2025-11-08T00:29:49.512765617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:49.527728 systemd[1]: Started cri-containerd-8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc.scope - libcontainer container 8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc. 
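The endpoint record written above carries MAC "12:c6:6b:e9:44:c2" for interface calieae1fb74fb8. Its first octet marks a locally administered unicast address, i.e. one generated at interface creation rather than taken from a vendor OUI, which a short check confirms:

```go
// Bit-level look at the MAC recorded in the endpoint above.
package main

import (
	"fmt"
	"net"
)

func main() {
	mac, err := net.ParseMAC("12:c6:6b:e9:44:c2")
	if err != nil {
		panic(err)
	}
	fmt.Printf("unicast: %v\n", mac[0]&0x01 == 0)              // I/G bit clear
	fmt.Printf("locally administered: %v\n", mac[0]&0x02 != 0) // U/L bit set
}
```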
Nov 8 00:29:49.537313 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:49.568983 containerd[1547]: time="2025-11-08T00:29:49.567736907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84848b66c4-gnwcd,Uid:926ce8dd-4771-4d76-a928-b17ff008cf2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8205e433dfbe09439912b88422c4755690b7c63004dd9da42eb7157b8cb398dc\"" Nov 8 00:29:49.603067 containerd[1547]: time="2025-11-08T00:29:49.602156662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:29:49.985410 containerd[1547]: time="2025-11-08T00:29:49.985310384Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:49.990699 containerd[1547]: time="2025-11-08T00:29:49.986521184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:29:49.990787 containerd[1547]: time="2025-11-08T00:29:49.987396872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:29:50.011509 kubelet[2738]: E1108 00:29:50.011374 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:50.011509 kubelet[2738]: E1108 00:29:50.011443 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:50.213438 kubelet[2738]: E1108 00:29:50.213379 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ac9f2fab8b1a41b6acb7bc84bb1a359e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vlmq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84848b66c4-gnwcd_calico-system(926ce8dd-4771-4d76-a928-b17ff008cf2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:50.215265 containerd[1547]: time="2025-11-08T00:29:50.215072456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:29:50.495778 kubelet[2738]: I1108 00:29:50.495738 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5cc6a0f-3bc6-4948-a19c-70ab4e2da335" path="/var/lib/kubelet/pods/c5cc6a0f-3bc6-4948-a19c-70ab4e2da335/volumes" Nov 8 00:29:50.578801 containerd[1547]: time="2025-11-08T00:29:50.578757397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:50.594219 containerd[1547]: time="2025-11-08T00:29:50.594186269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:29:50.594371 containerd[1547]: time="2025-11-08T00:29:50.594261301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:50.594405 kubelet[2738]: E1108 00:29:50.594367 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:50.594405 kubelet[2738]: E1108 00:29:50.594399 2738 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:50.594542 kubelet[2738]: E1108 00:29:50.594500 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlmq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84848b66c4-gnwcd_calico-system(926ce8dd-4771-4d76-a928-b17ff008cf2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:50.595689 kubelet[2738]: E1108 00:29:50.595659 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:29:50.907750 systemd-networkd[1441]: 
vxlan.calico: Gained IPv6LL Nov 8 00:29:50.932039 kubelet[2738]: E1108 00:29:50.931966 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:29:51.035741 systemd-networkd[1441]: calieae1fb74fb8: Gained IPv6LL Nov 8 00:29:53.488411 containerd[1547]: time="2025-11-08T00:29:53.488179375Z" level=info msg="StopPodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\"" Nov 8 00:29:53.488411 containerd[1547]: time="2025-11-08T00:29:53.488234475Z" level=info msg="StopPodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\"" Nov 8 00:29:53.489880 containerd[1547]: time="2025-11-08T00:29:53.489719933Z" level=info msg="StopPodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\"" Nov 8 00:29:53.490176 containerd[1547]: time="2025-11-08T00:29:53.490157281Z" level=info msg="StopPodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\"" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.566 [INFO][4431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" iface="eth0" netns="/var/run/netns/cni-f4352b67-d272-4ce1-f19a-b243b54e1b93" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" iface="eth0" netns="/var/run/netns/cni-f4352b67-d272-4ce1-f19a-b243b54e1b93" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" iface="eth0" netns="/var/run/netns/cni-f4352b67-d272-4ce1-f19a-b243b54e1b93" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.605 [INFO][4463] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.605 [INFO][4463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.605 [INFO][4463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.612 [WARNING][4463] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.612 [INFO][4463] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.613 [INFO][4463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:53.619338 containerd[1547]: 2025-11-08 00:29:53.617 [INFO][4431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:29:53.622888 containerd[1547]: time="2025-11-08T00:29:53.619628262Z" level=info msg="TearDown network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" successfully" Nov 8 00:29:53.622888 containerd[1547]: time="2025-11-08T00:29:53.619650689Z" level=info msg="StopPodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" returns successfully" Nov 8 00:29:53.621154 systemd[1]: run-netns-cni\x2df4352b67\x2dd272\x2d4ce1\x2df19a\x2db243b54e1b93.mount: Deactivated successfully. Nov 8 00:29:53.623943 containerd[1547]: time="2025-11-08T00:29:53.623917874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-lwfmb,Uid:dc1a7be6-78b9-4b63-807c-f29c0ef99466,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.571 [INFO][4428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.572 [INFO][4428] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" iface="eth0" netns="/var/run/netns/cni-ae1132e4-93f0-909f-0ced-4a40d916493f" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.572 [INFO][4428] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" iface="eth0" netns="/var/run/netns/cni-ae1132e4-93f0-909f-0ced-4a40d916493f" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.572 [INFO][4428] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" iface="eth0" netns="/var/run/netns/cni-ae1132e4-93f0-909f-0ced-4a40d916493f" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.572 [INFO][4428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.572 [INFO][4428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.609 [INFO][4471] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.609 [INFO][4471] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.613 [INFO][4471] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.623 [WARNING][4471] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.623 [INFO][4471] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.625 [INFO][4471] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:53.628684 containerd[1547]: 2025-11-08 00:29:53.627 [INFO][4428] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:29:53.630087 containerd[1547]: time="2025-11-08T00:29:53.630000208Z" level=info msg="TearDown network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" successfully" Nov 8 00:29:53.630087 containerd[1547]: time="2025-11-08T00:29:53.630018523Z" level=info msg="StopPodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" returns successfully" Nov 8 00:29:53.630480 containerd[1547]: time="2025-11-08T00:29:53.630454998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnwtm,Uid:007a5707-c952-467d-a723-faa6baf2e9bc,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:53.632180 systemd[1]: run-netns-cni\x2dae1132e4\x2d93f0\x2d909f\x2d0ced\x2d4a40d916493f.mount: Deactivated successfully. Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.566 [INFO][4441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" iface="eth0" netns="/var/run/netns/cni-7515de2b-60fe-5623-c671-531045579295" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" iface="eth0" netns="/var/run/netns/cni-7515de2b-60fe-5623-c671-531045579295" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" iface="eth0" netns="/var/run/netns/cni-7515de2b-60fe-5623-c671-531045579295" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.567 [INFO][4441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.610 [INFO][4461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.610 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.625 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.631 [WARNING][4461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.631 [INFO][4461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.634 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:53.642322 containerd[1547]: 2025-11-08 00:29:53.636 [INFO][4441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:29:53.645633 containerd[1547]: time="2025-11-08T00:29:53.642522805Z" level=info msg="TearDown network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" successfully" Nov 8 00:29:53.645633 containerd[1547]: time="2025-11-08T00:29:53.642538968Z" level=info msg="StopPodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" returns successfully" Nov 8 00:29:53.647106 systemd[1]: run-netns-cni\x2d7515de2b\x2d60fe\x2d5623\x2dc671\x2d531045579295.mount: Deactivated successfully. Nov 8 00:29:53.653869 containerd[1547]: time="2025-11-08T00:29:53.653844374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-w74ds,Uid:a6c7b38c-00b0-4b95-83b4-14d8b8afda37,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.583 [INFO][4442] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.583 [INFO][4442] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" iface="eth0" netns="/var/run/netns/cni-eba7ef41-d8b5-b056-b1f6-084953f6477f" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.585 [INFO][4442] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" iface="eth0" netns="/var/run/netns/cni-eba7ef41-d8b5-b056-b1f6-084953f6477f" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.586 [INFO][4442] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" iface="eth0" netns="/var/run/netns/cni-eba7ef41-d8b5-b056-b1f6-084953f6477f" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.586 [INFO][4442] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.586 [INFO][4442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.623 [INFO][4476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.623 [INFO][4476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.634 [INFO][4476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.645 [WARNING][4476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.646 [INFO][4476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.652 [INFO][4476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:53.657744 containerd[1547]: 2025-11-08 00:29:53.655 [INFO][4442] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:29:53.659006 containerd[1547]: time="2025-11-08T00:29:53.657814510Z" level=info msg="TearDown network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" successfully" Nov 8 00:29:53.659006 containerd[1547]: time="2025-11-08T00:29:53.657827848Z" level=info msg="StopPodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" returns successfully" Nov 8 00:29:53.659006 containerd[1547]: time="2025-11-08T00:29:53.658526428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d6675b9f-clrr6,Uid:6a9d7321-1148-43be-b5df-da7f193de30d,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:53.770730 systemd-networkd[1441]: cali1a25e0949aa: Link UP Nov 8 00:29:53.770861 systemd-networkd[1441]: cali1a25e0949aa: Gained carrier Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.687 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0 calico-apiserver-7f5dbf8768- calico-apiserver dc1a7be6-78b9-4b63-807c-f29c0ef99466 966 0 2025-11-08 00:29:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5dbf8768 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5dbf8768-lwfmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a25e0949aa [] [] }} ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.687 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.715 [INFO][4514] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" HandleID="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.715 [INFO][4514] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" HandleID="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5dbf8768-lwfmb", "timestamp":"2025-11-08 00:29:53.715089315 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.715 [INFO][4514] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.715 [INFO][4514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.715 [INFO][4514] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.721 [INFO][4514] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.739 [INFO][4514] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.742 [INFO][4514] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.743 [INFO][4514] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.744 [INFO][4514] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.744 [INFO][4514] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.745 [INFO][4514] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9 Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.756 [INFO][4514] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.764 [INFO][4514] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.764 [INFO][4514] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" host="localhost" Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.765 [INFO][4514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
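Every Calico IPAM operation in this log, whether releasing addresses during the teardowns or assigning one here, brackets its work with the same three messages (about to acquire, acquired, released the host-wide IPAM lock), so concurrent CNI calls on the node run strictly one at a time. A toy Go model of that serialization, assuming nothing about Calico's real lock beyond the mutual exclusion the log shows; the IDs are shortened sandbox IDs from the entries above:

```go
// A toy model of the host-wide IPAM lock pattern: four CNI operations race,
// but their critical sections serialize. The mutex stands in for Calico's
// real lock and is not its implementation.
package main

import (
	"fmt"
	"sync"
)

var ipamLock sync.Mutex

func withIPAMLock(id string, fn func()) {
	fmt.Println(id, "About to acquire host-wide IPAM lock.")
	ipamLock.Lock()
	fmt.Println(id, "Acquired host-wide IPAM lock.")
	fn()
	fmt.Println(id, "Released host-wide IPAM lock.")
	ipamLock.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for _, id := range []string{"16ee7b06", "7e21492b", "51727125", "dbef0dff"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			withIPAMLock(id, func() { /* release or assign addresses here */ })
		}(id)
	}
	wg.Wait()
}
```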
Nov 8 00:29:53.796825 containerd[1547]: 2025-11-08 00:29:53.765 [INFO][4514] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" HandleID="k8s-pod-network.902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.797342 containerd[1547]: 2025-11-08 00:29:53.767 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc1a7be6-78b9-4b63-807c-f29c0ef99466", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5dbf8768-lwfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a25e0949aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:53.797342 containerd[1547]: 2025-11-08 00:29:53.768 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.797342 containerd[1547]: 2025-11-08 00:29:53.768 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a25e0949aa ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.797342 containerd[1547]: 2025-11-08 00:29:53.776 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.797342 containerd[1547]: 2025-11-08 00:29:53.778 [INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc1a7be6-78b9-4b63-807c-f29c0ef99466", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9", Pod:"calico-apiserver-7f5dbf8768-lwfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a25e0949aa", MAC:"62:7a:e9:3f:c1:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:53.797342 containerd[1547]: 2025-11-08 00:29:53.786 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-lwfmb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:29:53.844597 containerd[1547]: time="2025-11-08T00:29:53.844532307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:53.844597 containerd[1547]: time="2025-11-08T00:29:53.844571819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:53.844597 containerd[1547]: time="2025-11-08T00:29:53.844583792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:53.844766 containerd[1547]: time="2025-11-08T00:29:53.844666688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:53.866706 systemd[1]: Started cri-containerd-902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9.scope - libcontainer container 902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9. 
Nov 8 00:29:53.880427 systemd-networkd[1441]: cali16c0dec5d8a: Link UP Nov 8 00:29:53.881343 systemd-networkd[1441]: cali16c0dec5d8a: Gained carrier Nov 8 00:29:53.887235 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.694 [INFO][4503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--tnwtm-eth0 goldmane-666569f655- calico-system 007a5707-c952-467d-a723-faa6baf2e9bc 967 0 2025-11-08 00:29:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-tnwtm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali16c0dec5d8a [] [] }} ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.694 [INFO][4503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.722 [INFO][4519] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" HandleID="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.722 [INFO][4519] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" HandleID="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-tnwtm", "timestamp":"2025-11-08 00:29:53.722329339 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.722 [INFO][4519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.766 [INFO][4519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.766 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.820 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.848 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.853 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.854 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.855 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.855 [INFO][4519] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.856 [INFO][4519] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.860 [INFO][4519] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.872 [INFO][4519] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.872 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" host="localhost" Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.872 [INFO][4519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:29:53.895066 containerd[1547]: 2025-11-08 00:29:53.872 [INFO][4519] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" HandleID="k8s-pod-network.876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.895483 containerd[1547]: 2025-11-08 00:29:53.877 [INFO][4503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tnwtm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"007a5707-c952-467d-a723-faa6baf2e9bc", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-tnwtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16c0dec5d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:53.895483 containerd[1547]: 2025-11-08 00:29:53.877 [INFO][4503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.895483 containerd[1547]: 2025-11-08 00:29:53.877 [INFO][4503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16c0dec5d8a ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.895483 containerd[1547]: 2025-11-08 00:29:53.881 [INFO][4503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.895483 containerd[1547]: 2025-11-08 00:29:53.881 [INFO][4503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tnwtm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"007a5707-c952-467d-a723-faa6baf2e9bc", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb", Pod:"goldmane-666569f655-tnwtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16c0dec5d8a", MAC:"1e:57:b5:6e:10:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:53.895483 containerd[1547]: 2025-11-08 00:29:53.891 [INFO][4503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb" Namespace="calico-system" Pod="goldmane-666569f655-tnwtm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:29:53.911241 containerd[1547]: time="2025-11-08T00:29:53.911165953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:53.911241 containerd[1547]: time="2025-11-08T00:29:53.911217305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:53.911241 containerd[1547]: time="2025-11-08T00:29:53.911228218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:53.911882 containerd[1547]: time="2025-11-08T00:29:53.911283526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:53.931277 systemd[1]: Started cri-containerd-876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb.scope - libcontainer container 876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb. 
Nov 8 00:29:53.937078 containerd[1547]: time="2025-11-08T00:29:53.937014347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-lwfmb,Uid:dc1a7be6-78b9-4b63-807c-f29c0ef99466,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9\"" Nov 8 00:29:53.938359 containerd[1547]: time="2025-11-08T00:29:53.938330663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:53.946517 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:53.976574 systemd-networkd[1441]: cali68566c519b2: Link UP Nov 8 00:29:53.977081 systemd-networkd[1441]: cali68566c519b2: Gained carrier Nov 8 00:29:53.981421 containerd[1547]: time="2025-11-08T00:29:53.981393718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnwtm,Uid:007a5707-c952-467d-a723-faa6baf2e9bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb\"" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.767 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0 calico-apiserver-7f5dbf8768- calico-apiserver a6c7b38c-00b0-4b95-83b4-14d8b8afda37 965 0 2025-11-08 00:29:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5dbf8768 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5dbf8768-w74ds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68566c519b2 [] [] }} ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.768 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.835 [INFO][4554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" HandleID="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.835 [INFO][4554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" HandleID="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5dbf8768-w74ds", "timestamp":"2025-11-08 00:29:53.835810931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.835 [INFO][4554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.873 [INFO][4554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.873 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.922 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.941 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.952 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.953 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.954 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.954 [INFO][4554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.955 [INFO][4554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81 Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.963 [INFO][4554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.971 [INFO][4554] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.971 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" host="localhost" Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.971 [INFO][4554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:29:53.994482 containerd[1547]: 2025-11-08 00:29:53.971 [INFO][4554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" HandleID="k8s-pod-network.2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.994980 containerd[1547]: 2025-11-08 00:29:53.973 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6c7b38c-00b0-4b95-83b4-14d8b8afda37", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5dbf8768-w74ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68566c519b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:53.994980 containerd[1547]: 2025-11-08 00:29:53.973 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.994980 containerd[1547]: 2025-11-08 00:29:53.973 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68566c519b2 ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.994980 containerd[1547]: 2025-11-08 00:29:53.978 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:53.994980 containerd[1547]: 2025-11-08 00:29:53.979 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6c7b38c-00b0-4b95-83b4-14d8b8afda37", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81", Pod:"calico-apiserver-7f5dbf8768-w74ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68566c519b2", MAC:"16:06:1c:dd:06:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:53.994980 containerd[1547]: 2025-11-08 00:29:53.991 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81" Namespace="calico-apiserver" Pod="calico-apiserver-7f5dbf8768-w74ds" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:29:54.008239 containerd[1547]: time="2025-11-08T00:29:54.007702793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:54.008239 containerd[1547]: time="2025-11-08T00:29:54.007771810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:54.008239 containerd[1547]: time="2025-11-08T00:29:54.007790995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.008239 containerd[1547]: time="2025-11-08T00:29:54.007871073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.021762 systemd[1]: Started cri-containerd-2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81.scope - libcontainer container 2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81. 
Nov 8 00:29:54.034595 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:54.064678 containerd[1547]: time="2025-11-08T00:29:54.064649093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5dbf8768-w74ds,Uid:a6c7b38c-00b0-4b95-83b4-14d8b8afda37,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81\"" Nov 8 00:29:54.076189 systemd-networkd[1441]: calib31b948a660: Link UP Nov 8 00:29:54.076479 systemd-networkd[1441]: calib31b948a660: Gained carrier Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.816 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0 calico-kube-controllers-57d6675b9f- calico-system 6a9d7321-1148-43be-b5df-da7f193de30d 968 0 2025-11-08 00:29:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57d6675b9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57d6675b9f-clrr6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib31b948a660 [] [] }} ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.816 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.848 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" HandleID="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.848 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" HandleID="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57d6675b9f-clrr6", "timestamp":"2025-11-08 00:29:53.847135894 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.848 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.971 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:53.971 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.025 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.042 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.055 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.058 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.060 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.060 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.061 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409 Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.067 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.070 [INFO][4572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.070 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" host="localhost" Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.070 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:29:54.102458 containerd[1547]: 2025-11-08 00:29:54.070 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" HandleID="k8s-pod-network.a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.104260 containerd[1547]: 2025-11-08 00:29:54.071 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0", GenerateName:"calico-kube-controllers-57d6675b9f-", Namespace:"calico-system", SelfLink:"", UID:"6a9d7321-1148-43be-b5df-da7f193de30d", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d6675b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57d6675b9f-clrr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib31b948a660", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:54.104260 containerd[1547]: 2025-11-08 00:29:54.071 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.104260 containerd[1547]: 2025-11-08 00:29:54.072 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib31b948a660 ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.104260 containerd[1547]: 2025-11-08 00:29:54.075 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.104260 containerd[1547]: 2025-11-08 00:29:54.075 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0", GenerateName:"calico-kube-controllers-57d6675b9f-", Namespace:"calico-system", SelfLink:"", UID:"6a9d7321-1148-43be-b5df-da7f193de30d", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d6675b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409", Pod:"calico-kube-controllers-57d6675b9f-clrr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib31b948a660", MAC:"56:7a:21:a6:f5:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:54.104260 containerd[1547]: 2025-11-08 00:29:54.100 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409" Namespace="calico-system" Pod="calico-kube-controllers-57d6675b9f-clrr6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:29:54.125364 containerd[1547]: time="2025-11-08T00:29:54.125166198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:54.125364 containerd[1547]: time="2025-11-08T00:29:54.125201054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:54.125364 containerd[1547]: time="2025-11-08T00:29:54.125210765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.125364 containerd[1547]: time="2025-11-08T00:29:54.125256589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.143715 systemd[1]: Started cri-containerd-a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409.scope - libcontainer container a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409. 
Nov 8 00:29:54.172796 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:54.194351 containerd[1547]: time="2025-11-08T00:29:54.194326403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d6675b9f-clrr6,Uid:6a9d7321-1148-43be-b5df-da7f193de30d,Namespace:calico-system,Attempt:1,} returns sandbox id \"a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409\"" Nov 8 00:29:54.276456 containerd[1547]: time="2025-11-08T00:29:54.276349637Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:54.276807 containerd[1547]: time="2025-11-08T00:29:54.276758924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:54.277355 containerd[1547]: time="2025-11-08T00:29:54.276824038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:54.277397 kubelet[2738]: E1108 00:29:54.276944 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:54.277397 kubelet[2738]: E1108 00:29:54.276981 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:54.277397 kubelet[2738]: E1108 00:29:54.277146 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6658x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5dbf8768-lwfmb_calico-apiserver(dc1a7be6-78b9-4b63-807c-f29c0ef99466): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:54.278725 kubelet[2738]: E1108 00:29:54.278228 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:29:54.278784 containerd[1547]: time="2025-11-08T00:29:54.277708769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:29:54.489815 containerd[1547]: time="2025-11-08T00:29:54.489786864Z" level=info msg="StopPodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\"" Nov 8 00:29:54.490216 containerd[1547]: time="2025-11-08T00:29:54.490185819Z" level=info msg="StopPodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\"" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.534 [INFO][4792] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.535 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" iface="eth0" netns="/var/run/netns/cni-ca6d1597-6ac2-f954-5035-0880f1ba726f" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.535 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" iface="eth0" netns="/var/run/netns/cni-ca6d1597-6ac2-f954-5035-0880f1ba726f" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.535 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" iface="eth0" netns="/var/run/netns/cni-ca6d1597-6ac2-f954-5035-0880f1ba726f" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.535 [INFO][4792] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.535 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.550 [INFO][4804] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.550 [INFO][4804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.550 [INFO][4804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.575 [WARNING][4804] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.575 [INFO][4804] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.576 [INFO][4804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:54.579060 containerd[1547]: 2025-11-08 00:29:54.577 [INFO][4792] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:29:54.579719 containerd[1547]: time="2025-11-08T00:29:54.579411414Z" level=info msg="TearDown network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" successfully" Nov 8 00:29:54.579719 containerd[1547]: time="2025-11-08T00:29:54.579428342Z" level=info msg="StopPodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" returns successfully" Nov 8 00:29:54.588262 containerd[1547]: time="2025-11-08T00:29:54.579986654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5dvc,Uid:1885839c-21b3-4320-a460-ea9b5405da38,Namespace:kube-system,Attempt:1,}" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.554 [INFO][4791] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.555 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" iface="eth0" netns="/var/run/netns/cni-32465a29-776c-402e-0eb0-13a80caa5a3d" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.555 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" iface="eth0" netns="/var/run/netns/cni-32465a29-776c-402e-0eb0-13a80caa5a3d" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.555 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" iface="eth0" netns="/var/run/netns/cni-32465a29-776c-402e-0eb0-13a80caa5a3d" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.556 [INFO][4791] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.556 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.581 [INFO][4811] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.582 [INFO][4811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.582 [INFO][4811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.586 [WARNING][4811] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.586 [INFO][4811] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.586 [INFO][4811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:54.589274 containerd[1547]: 2025-11-08 00:29:54.588 [INFO][4791] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:29:54.589537 containerd[1547]: time="2025-11-08T00:29:54.589382832Z" level=info msg="TearDown network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" successfully" Nov 8 00:29:54.589537 containerd[1547]: time="2025-11-08T00:29:54.589398725Z" level=info msg="StopPodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" returns successfully" Nov 8 00:29:54.590136 containerd[1547]: time="2025-11-08T00:29:54.589983716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4kl5,Uid:a1ec52db-bd41-4d19-b1f6-a1fab4a28f01,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:54.614136 containerd[1547]: time="2025-11-08T00:29:54.614101649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:54.615217 containerd[1547]: time="2025-11-08T00:29:54.615190445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:29:54.615335 containerd[1547]: time="2025-11-08T00:29:54.615256895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:54.615598 kubelet[2738]: E1108 00:29:54.615427 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:54.615598 kubelet[2738]: E1108 00:29:54.615461 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:54.616215 kubelet[2738]: E1108 00:29:54.615588 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6tp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnwtm_calico-system(007a5707-c952-467d-a723-faa6baf2e9bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:54.617144 kubelet[2738]: E1108 00:29:54.616792 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:29:54.617408 containerd[1547]: 
time="2025-11-08T00:29:54.617372329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:54.631787 systemd[1]: run-netns-cni\x2deba7ef41\x2dd8b5\x2db056\x2db1f6\x2d084953f6477f.mount: Deactivated successfully. Nov 8 00:29:54.631852 systemd[1]: run-netns-cni\x2dca6d1597\x2d6ac2\x2df954\x2d5035\x2d0880f1ba726f.mount: Deactivated successfully. Nov 8 00:29:54.631896 systemd[1]: run-netns-cni\x2d32465a29\x2d776c\x2d402e\x2d0eb0\x2d13a80caa5a3d.mount: Deactivated successfully. Nov 8 00:29:54.709116 systemd-networkd[1441]: cali194a5a45e66: Link UP Nov 8 00:29:54.709885 systemd-networkd[1441]: cali194a5a45e66: Gained carrier Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.644 [INFO][4817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0 coredns-674b8bbfcf- kube-system 1885839c-21b3-4320-a460-ea9b5405da38 992 0 2025-11-08 00:29:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-v5dvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali194a5a45e66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.644 [INFO][4817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4840] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" HandleID="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4840] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" HandleID="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-v5dvc", "timestamp":"2025-11-08 00:29:54.665450767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4840] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4840] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4840] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.670 [INFO][4840] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.684 [INFO][4840] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.687 [INFO][4840] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.688 [INFO][4840] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.689 [INFO][4840] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.689 [INFO][4840] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.689 [INFO][4840] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.696 [INFO][4840] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.699 [INFO][4840] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.699 [INFO][4840] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" host="localhost" Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.699 [INFO][4840] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
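The ipam.go trace above is the whole assignment path for the coredns pod: look up the host's affinities, confirm affinity for block 192.168.88.128/26, load the block, claim one address from it, and write the block back to the datastore. The claim lands on 192.168.88.134, presumably because earlier endpoints already hold .128 through .133. A hedged sketch of just the block walk, assuming a simple in-memory "used" set where the real allocator reads the block from the datastore and retries on write conflicts:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree performs the "attempt to assign 1 address from block" step: scan
// the affine block for the first address not yet claimed on this host.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // exhausted; real IPAM would look for another block
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{}
	// Assumption inferred from the log: earlier endpoints hold .128 through .133.
	for a := netip.MustParseAddr("192.168.88.128"); a.Less(netip.MustParseAddr("192.168.88.134")); a = a.Next() {
		used[a] = true
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println(ip) // 192.168.88.134, matching the coredns endpoint above
	}
}
```

The block affinity is what keeps the search host-local: while the /26 belongs to this node, no other node hands out addresses from it, so the only contention left is between concurrent CNI invocations on the same host.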
Nov 8 00:29:54.719484 containerd[1547]: 2025-11-08 00:29:54.699 [INFO][4840] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" HandleID="k8s-pod-network.4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.720506 containerd[1547]: 2025-11-08 00:29:54.706 [INFO][4817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1885839c-21b3-4320-a460-ea9b5405da38", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-v5dvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali194a5a45e66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:54.720506 containerd[1547]: 2025-11-08 00:29:54.706 [INFO][4817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.720506 containerd[1547]: 2025-11-08 00:29:54.706 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali194a5a45e66 ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.720506 containerd[1547]: 2025-11-08 00:29:54.710 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.720506 
containerd[1547]: 2025-11-08 00:29:54.711 [INFO][4817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1885839c-21b3-4320-a460-ea9b5405da38", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a", Pod:"coredns-674b8bbfcf-v5dvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali194a5a45e66", MAC:"6e:f2:88:9a:76:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:54.720506 containerd[1547]: 2025-11-08 00:29:54.716 [INFO][4817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a" Namespace="kube-system" Pod="coredns-674b8bbfcf-v5dvc" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:29:54.735020 containerd[1547]: time="2025-11-08T00:29:54.734676230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:54.735195 containerd[1547]: time="2025-11-08T00:29:54.735035297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:54.735195 containerd[1547]: time="2025-11-08T00:29:54.735060849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.735338 containerd[1547]: time="2025-11-08T00:29:54.735190577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.753714 systemd[1]: Started cri-containerd-4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a.scope - libcontainer container 4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a. Nov 8 00:29:54.762242 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:54.787834 containerd[1547]: time="2025-11-08T00:29:54.787800175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v5dvc,Uid:1885839c-21b3-4320-a460-ea9b5405da38,Namespace:kube-system,Attempt:1,} returns sandbox id \"4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a\"" Nov 8 00:29:54.808763 containerd[1547]: time="2025-11-08T00:29:54.808621231Z" level=info msg="CreateContainer within sandbox \"4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:29:54.852301 systemd-networkd[1441]: calicfa64c55d95: Link UP Nov 8 00:29:54.856493 systemd-networkd[1441]: calicfa64c55d95: Gained carrier Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.645 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w4kl5-eth0 csi-node-driver- calico-system a1ec52db-bd41-4d19-b1f6-a1fab4a28f01 993 0 2025-11-08 00:29:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w4kl5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicfa64c55d95 [] [] }} ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.646 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4842] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" HandleID="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4842] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" HandleID="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w4kl5", "timestamp":"2025-11-08 00:29:54.665779582 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.665 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.699 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.700 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.770 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.807 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.819 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.820 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.822 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.822 [INFO][4842] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.823 [INFO][4842] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6 Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.836 [INFO][4842] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.842 [INFO][4842] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.842 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" host="localhost" Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.842 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
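One detail worth pulling out of the two interleaved IPAM traces: handler [4842] logged "About to acquire host-wide IPAM lock" at 54.665 but "Acquired" only at 54.699, the same instant [4840] released the lock after claiming .134. The lock serializes concurrent CNI ADDs on the node so two sandboxes cannot claim the same address out of the shared /26. A toy reproduction of that serialization, with a plain sync.Mutex standing in for Calico's host-wide lock:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	hostLock sync.Mutex // stands in for Calico's host-wide IPAM lock
	lastOct  = 134      // first free host octet; .128-.133 assumed claimed above
)

// assign mimics the critical section of one CNI ADD: acquire the lock, claim
// the next free address, release. Without the lock, both invocations could
// read 134 and hand the same IP to two pods.
func assign() string {
	hostLock.Lock()
	defer hostLock.Unlock()
	ip := fmt.Sprintf("192.168.88.%d/26", lastOct)
	lastOct++
	return ip
}

func main() {
	var wg sync.WaitGroup
	for _, plugin := range []string{"4840", "4842"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Printf("[%s] claimed %s\n", p, assign())
		}(plugin)
	}
	wg.Wait() // whichever goroutine wins the lock gets .134, the other .135
}
```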
Nov 8 00:29:54.898785 containerd[1547]: 2025-11-08 00:29:54.842 [INFO][4842] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" HandleID="k8s-pod-network.4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.907124 containerd[1547]: 2025-11-08 00:29:54.844 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4kl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w4kl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfa64c55d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:54.907124 containerd[1547]: 2025-11-08 00:29:54.844 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.907124 containerd[1547]: 2025-11-08 00:29:54.844 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfa64c55d95 ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.907124 containerd[1547]: 2025-11-08 00:29:54.857 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.907124 containerd[1547]: 2025-11-08 00:29:54.858 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4kl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6", Pod:"csi-node-driver-w4kl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfa64c55d95", MAC:"56:58:38:e5:e4:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:54.907124 containerd[1547]: 2025-11-08 00:29:54.896 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6" Namespace="calico-system" Pod="csi-node-driver-w4kl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:29:54.919311 containerd[1547]: time="2025-11-08T00:29:54.919164089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:54.919311 containerd[1547]: time="2025-11-08T00:29:54.919222936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:54.919311 containerd[1547]: time="2025-11-08T00:29:54.919230746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.919311 containerd[1547]: time="2025-11-08T00:29:54.919279471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:54.934707 systemd[1]: Started cri-containerd-4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6.scope - libcontainer container 4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6. 
Nov 8 00:29:54.942673 kubelet[2738]: E1108 00:29:54.942638 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:29:54.953450 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:54.965172 kubelet[2738]: E1108 00:29:54.964413 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:29:54.967368 containerd[1547]: time="2025-11-08T00:29:54.967345887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4kl5,Uid:a1ec52db-bd41-4d19-b1f6-a1fab4a28f01,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6\"" Nov 8 00:29:54.974535 containerd[1547]: time="2025-11-08T00:29:54.974506956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:54.980861 containerd[1547]: time="2025-11-08T00:29:54.980820832Z" level=info msg="CreateContainer within sandbox \"4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab496a3bdd24fe670939c3f9f27ec7c093f864f260faf8b670347c6149faa6bb\"" Nov 8 00:29:54.988086 containerd[1547]: time="2025-11-08T00:29:54.983133777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:54.988086 containerd[1547]: time="2025-11-08T00:29:54.983174319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:54.991869 kubelet[2738]: E1108 00:29:54.991813 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:54.992695 kubelet[2738]: E1108 00:29:54.992062 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:54.992695 kubelet[2738]: E1108 00:29:54.992166 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9wmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5dbf8768-w74ds_calico-apiserver(a6c7b38c-00b0-4b95-83b4-14d8b8afda37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:54.993202 containerd[1547]: time="2025-11-08T00:29:54.992960798Z" level=info msg="StartContainer for \"ab496a3bdd24fe670939c3f9f27ec7c093f864f260faf8b670347c6149faa6bb\"" Nov 8 00:29:54.993370 containerd[1547]: time="2025-11-08T00:29:54.993354371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:29:54.994152 kubelet[2738]: E1108 00:29:54.993565 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:29:55.019718 systemd[1]: Started 
cri-containerd-ab496a3bdd24fe670939c3f9f27ec7c093f864f260faf8b670347c6149faa6bb.scope - libcontainer container ab496a3bdd24fe670939c3f9f27ec7c093f864f260faf8b670347c6149faa6bb. Nov 8 00:29:55.067748 systemd-networkd[1441]: cali68566c519b2: Gained IPv6LL Nov 8 00:29:55.103539 containerd[1547]: time="2025-11-08T00:29:55.102692343Z" level=info msg="StartContainer for \"ab496a3bdd24fe670939c3f9f27ec7c093f864f260faf8b670347c6149faa6bb\" returns successfully" Nov 8 00:29:55.195904 systemd-networkd[1441]: cali16c0dec5d8a: Gained IPv6LL Nov 8 00:29:55.196352 systemd-networkd[1441]: calib31b948a660: Gained IPv6LL Nov 8 00:29:55.376703 containerd[1547]: time="2025-11-08T00:29:55.376075665Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:55.381852 containerd[1547]: time="2025-11-08T00:29:55.381817454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:29:55.381955 containerd[1547]: time="2025-11-08T00:29:55.381886794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:55.382198 kubelet[2738]: E1108 00:29:55.382102 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:55.382198 kubelet[2738]: E1108 00:29:55.382142 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:55.383822 kubelet[2738]: E1108 00:29:55.382590 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d6675b9f-clrr6_calico-system(6a9d7321-1148-43be-b5df-da7f193de30d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:55.384383 containerd[1547]: time="2025-11-08T00:29:55.384085674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:29:55.384425 kubelet[2738]: E1108 00:29:55.384344 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" 
podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:29:55.621705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196240469.mount: Deactivated successfully. Nov 8 00:29:55.643832 systemd-networkd[1441]: cali1a25e0949aa: Gained IPv6LL Nov 8 00:29:55.836439 containerd[1547]: time="2025-11-08T00:29:55.836410579Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:55.840419 containerd[1547]: time="2025-11-08T00:29:55.840322048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:29:55.840419 containerd[1547]: time="2025-11-08T00:29:55.840379173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:29:55.847563 containerd[1547]: time="2025-11-08T00:29:55.842322311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:29:55.847621 kubelet[2738]: E1108 00:29:55.840525 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:55.847621 kubelet[2738]: E1108 00:29:55.840561 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:55.847621 kubelet[2738]: E1108 00:29:55.840676 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:55.969037 kubelet[2738]: E1108 00:29:55.969008 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:29:55.969960 kubelet[2738]: E1108 00:29:55.969171 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:29:55.969960 kubelet[2738]: E1108 00:29:55.969795 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:29:55.970128 kubelet[2738]: E1108 00:29:55.970043 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:29:55.992345 kubelet[2738]: I1108 00:29:55.991714 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v5dvc" podStartSLOduration=38.991702594 podStartE2EDuration="38.991702594s" podCreationTimestamp="2025-11-08 00:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:55.979820377 +0000 UTC m=+43.596332303" watchObservedRunningTime="2025-11-08 00:29:55.991702594 +0000 UTC m=+43.608214513" Nov 8 00:29:56.221687 containerd[1547]: time="2025-11-08T00:29:56.221387757Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:56.229130 containerd[1547]: time="2025-11-08T00:29:56.229005583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:29:56.229130 containerd[1547]: time="2025-11-08T00:29:56.229085005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:29:56.229238 kubelet[2738]: E1108 00:29:56.229205 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:56.229271 kubelet[2738]: E1108 00:29:56.229247 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:56.229358 kubelet[2738]: E1108 00:29:56.229329 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:56.230823 kubelet[2738]: E1108 00:29:56.230796 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:56.283717 systemd-networkd[1441]: calicfa64c55d95: Gained IPv6LL Nov 8 00:29:56.489889 containerd[1547]: time="2025-11-08T00:29:56.489514982Z" level=info msg="StopPodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\"" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.530 [INFO][5007] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.531 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" iface="eth0" netns="/var/run/netns/cni-30544137-5866-31eb-48d4-356850eda5d9" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.531 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" iface="eth0" netns="/var/run/netns/cni-30544137-5866-31eb-48d4-356850eda5d9" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.531 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" iface="eth0" netns="/var/run/netns/cni-30544137-5866-31eb-48d4-356850eda5d9" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.531 [INFO][5007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.531 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.546 [INFO][5014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.546 [INFO][5014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.546 [INFO][5014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.550 [WARNING][5014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.550 [INFO][5014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.550 [INFO][5014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:56.552870 containerd[1547]: 2025-11-08 00:29:56.551 [INFO][5007] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:29:56.554594 containerd[1547]: time="2025-11-08T00:29:56.554574299Z" level=info msg="TearDown network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" successfully" Nov 8 00:29:56.554594 containerd[1547]: time="2025-11-08T00:29:56.554593406Z" level=info msg="StopPodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" returns successfully" Nov 8 00:29:56.555314 containerd[1547]: time="2025-11-08T00:29:56.555298415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpww,Uid:d48d5302-73ac-4c35-86c4-ee48c074bbf4,Namespace:kube-system,Attempt:1,}" Nov 8 00:29:56.555544 systemd[1]: run-netns-cni\x2d30544137\x2d5866\x2d31eb\x2d48d4\x2d356850eda5d9.mount: Deactivated successfully. Nov 8 00:29:56.603730 systemd-networkd[1441]: cali194a5a45e66: Gained IPv6LL Nov 8 00:29:56.629111 systemd-networkd[1441]: calideffba17c54: Link UP Nov 8 00:29:56.629229 systemd-networkd[1441]: calideffba17c54: Gained carrier Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.586 [INFO][5020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4xpww-eth0 coredns-674b8bbfcf- kube-system d48d5302-73ac-4c35-86c4-ee48c074bbf4 1057 0 2025-11-08 00:29:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4xpww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calideffba17c54 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.586 [INFO][5020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.604 [INFO][5033] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" HandleID="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.604 [INFO][5033] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" HandleID="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4xpww", "timestamp":"2025-11-08 00:29:56.604254535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 
00:29:56.604 [INFO][5033] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.604 [INFO][5033] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.604 [INFO][5033] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.610 [INFO][5033] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.612 [INFO][5033] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.616 [INFO][5033] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.617 [INFO][5033] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.618 [INFO][5033] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.619 [INFO][5033] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.619 [INFO][5033] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.621 [INFO][5033] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.624 [INFO][5033] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.624 [INFO][5033] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" host="localhost" Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.624 [INFO][5033] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:29:56.642865 containerd[1547]: 2025-11-08 00:29:56.624 [INFO][5033] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" HandleID="k8s-pod-network.76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.645438 containerd[1547]: 2025-11-08 00:29:56.626 [INFO][5020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4xpww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d48d5302-73ac-4c35-86c4-ee48c074bbf4", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4xpww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideffba17c54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:56.645438 containerd[1547]: 2025-11-08 00:29:56.627 [INFO][5020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.645438 containerd[1547]: 2025-11-08 00:29:56.627 [INFO][5020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calideffba17c54 ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.645438 containerd[1547]: 2025-11-08 00:29:56.628 [INFO][5020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.645438 
containerd[1547]: 2025-11-08 00:29:56.629 [INFO][5020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4xpww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d48d5302-73ac-4c35-86c4-ee48c074bbf4", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe", Pod:"coredns-674b8bbfcf-4xpww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideffba17c54", MAC:"82:82:d2:0b:bf:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:56.645438 containerd[1547]: 2025-11-08 00:29:56.639 [INFO][5020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe" Namespace="kube-system" Pod="coredns-674b8bbfcf-4xpww" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:29:56.659835 containerd[1547]: time="2025-11-08T00:29:56.659759000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:56.659969 containerd[1547]: time="2025-11-08T00:29:56.659830582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:56.659969 containerd[1547]: time="2025-11-08T00:29:56.659915930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:56.660080 containerd[1547]: time="2025-11-08T00:29:56.660029527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:56.672055 systemd[1]: run-containerd-runc-k8s.io-76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe-runc.jLiFgP.mount: Deactivated successfully. 
Nov 8 00:29:56.682846 systemd[1]: Started cri-containerd-76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe.scope - libcontainer container 76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe. Nov 8 00:29:56.692643 systemd-resolved[1442]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:29:56.720956 containerd[1547]: time="2025-11-08T00:29:56.719983027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4xpww,Uid:d48d5302-73ac-4c35-86c4-ee48c074bbf4,Namespace:kube-system,Attempt:1,} returns sandbox id \"76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe\"" Nov 8 00:29:56.726143 containerd[1547]: time="2025-11-08T00:29:56.726123017Z" level=info msg="CreateContainer within sandbox \"76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:29:56.735102 containerd[1547]: time="2025-11-08T00:29:56.735074329Z" level=info msg="CreateContainer within sandbox \"76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71242df527cc289ae5d72d4c989e90f0fec8f229040018a77699232b8a6cae0f\"" Nov 8 00:29:56.736700 containerd[1547]: time="2025-11-08T00:29:56.736682357Z" level=info msg="StartContainer for \"71242df527cc289ae5d72d4c989e90f0fec8f229040018a77699232b8a6cae0f\"" Nov 8 00:29:56.756945 systemd[1]: Started cri-containerd-71242df527cc289ae5d72d4c989e90f0fec8f229040018a77699232b8a6cae0f.scope - libcontainer container 71242df527cc289ae5d72d4c989e90f0fec8f229040018a77699232b8a6cae0f. Nov 8 00:29:56.781272 containerd[1547]: time="2025-11-08T00:29:56.781206802Z" level=info msg="StartContainer for \"71242df527cc289ae5d72d4c989e90f0fec8f229040018a77699232b8a6cae0f\" returns successfully" Nov 8 00:29:56.972627 kubelet[2738]: E1108 00:29:56.972565 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:29:56.984366 kubelet[2738]: I1108 00:29:56.983680 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4xpww" podStartSLOduration=39.983668638 podStartE2EDuration="39.983668638s" podCreationTimestamp="2025-11-08 00:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:56.982735776 +0000 UTC m=+44.599247716" watchObservedRunningTime="2025-11-08 00:29:56.983668638 +0000 UTC m=+44.600180571" Nov 8 00:29:57.664014 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3564930594.mount: Deactivated successfully. Nov 8 00:29:58.011813 systemd-networkd[1441]: calideffba17c54: Gained IPv6LL Nov 8 00:30:04.489500 containerd[1547]: time="2025-11-08T00:30:04.489238054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:30:04.847558 containerd[1547]: time="2025-11-08T00:30:04.847459397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:04.847986 containerd[1547]: time="2025-11-08T00:30:04.847958039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:30:04.848062 containerd[1547]: time="2025-11-08T00:30:04.848012696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:30:04.848179 kubelet[2738]: E1108 00:30:04.848146 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:30:04.848400 kubelet[2738]: E1108 00:30:04.848188 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:30:04.848400 kubelet[2738]: E1108 00:30:04.848281 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ac9f2fab8b1a41b6acb7bc84bb1a359e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vlmq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84848b66c4-gnwcd_calico-system(926ce8dd-4771-4d76-a928-b17ff008cf2e): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:04.851070 containerd[1547]: time="2025-11-08T00:30:04.850518261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:30:05.239174 containerd[1547]: time="2025-11-08T00:30:05.239138499Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:05.239670 containerd[1547]: time="2025-11-08T00:30:05.239649518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:30:05.239778 containerd[1547]: time="2025-11-08T00:30:05.239697217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:30:05.239802 kubelet[2738]: E1108 00:30:05.239763 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:30:05.239802 kubelet[2738]: E1108 00:30:05.239790 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:30:05.239887 kubelet[2738]: E1108 00:30:05.239861 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlmq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84848b66c4-gnwcd_calico-system(926ce8dd-4771-4d76-a928-b17ff008cf2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:05.241013 kubelet[2738]: E1108 00:30:05.240991 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:30:07.490265 containerd[1547]: time="2025-11-08T00:30:07.490226114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:30:07.811886 containerd[1547]: time="2025-11-08T00:30:07.811778626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:07.812282 containerd[1547]: time="2025-11-08T00:30:07.812252434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:30:07.812620 containerd[1547]: time="2025-11-08T00:30:07.812326715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:07.812661 kubelet[2738]: E1108 00:30:07.812415 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:07.812661 kubelet[2738]: E1108 00:30:07.812449 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:07.812661 kubelet[2738]: E1108 00:30:07.812562 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9wmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5dbf8768-w74ds_calico-apiserver(a6c7b38c-00b0-4b95-83b4-14d8b8afda37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:07.813943 kubelet[2738]: E1108 00:30:07.813923 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:30:08.490730 containerd[1547]: time="2025-11-08T00:30:08.490266709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:30:08.851503 containerd[1547]: time="2025-11-08T00:30:08.851266280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:08.851761 containerd[1547]: time="2025-11-08T00:30:08.851685778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:30:08.851761 containerd[1547]: time="2025-11-08T00:30:08.851731564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:30:08.851876 kubelet[2738]: E1108 00:30:08.851847 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:30:08.851876 kubelet[2738]: E1108 00:30:08.851885 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:30:08.853595 kubelet[2738]: E1108 00:30:08.852024 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:08.853825 containerd[1547]: time="2025-11-08T00:30:08.852081201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:30:09.216110 containerd[1547]: time="2025-11-08T00:30:09.216076716Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:09.216576 containerd[1547]: time="2025-11-08T00:30:09.216537752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:30:09.216674 containerd[1547]: time="2025-11-08T00:30:09.216548437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:09.216810 kubelet[2738]: E1108 00:30:09.216776 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:09.216860 kubelet[2738]: E1108 00:30:09.216842 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:09.217384 kubelet[2738]: E1108 00:30:09.217126 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6658x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5dbf8768-lwfmb_calico-apiserver(dc1a7be6-78b9-4b63-807c-f29c0ef99466): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:09.217474 containerd[1547]: time="2025-11-08T00:30:09.217197598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:30:09.218774 kubelet[2738]: E1108 00:30:09.218756 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:30:09.585872 containerd[1547]: time="2025-11-08T00:30:09.585578535Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:09.586464 containerd[1547]: time="2025-11-08T00:30:09.586321537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:30:09.586464 containerd[1547]: time="2025-11-08T00:30:09.586368491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:30:09.586530 kubelet[2738]: E1108 00:30:09.586496 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:30:09.586572 kubelet[2738]: E1108 00:30:09.586535 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:30:09.587105 kubelet[2738]: E1108 00:30:09.586729 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:09.587211 containerd[1547]: time="2025-11-08T00:30:09.586752882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:30:09.587876 kubelet[2738]: E1108 00:30:09.587842 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:30:09.970344 containerd[1547]: time="2025-11-08T00:30:09.970302357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:09.976553 containerd[1547]: time="2025-11-08T00:30:09.976497986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:30:09.976681 containerd[1547]: time="2025-11-08T00:30:09.976592171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:30:09.976957 kubelet[2738]: E1108 00:30:09.976762 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:30:09.976957 kubelet[2738]: E1108 00:30:09.976800 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:30:09.976957 kubelet[2738]: E1108 00:30:09.976919 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d6675b9f-clrr6_calico-system(6a9d7321-1148-43be-b5df-da7f193de30d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:09.978933 kubelet[2738]: E1108 00:30:09.978395 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:30:10.489344 containerd[1547]: time="2025-11-08T00:30:10.489236222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:30:10.854268 containerd[1547]: time="2025-11-08T00:30:10.854024401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:10.854728 containerd[1547]: time="2025-11-08T00:30:10.854642615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:30:10.854728 containerd[1547]: time="2025-11-08T00:30:10.854696720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:10.854850 kubelet[2738]: E1108 00:30:10.854821 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:30:10.854895 kubelet[2738]: E1108 00:30:10.854858 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:30:10.854991 kubelet[2738]: E1108 
00:30:10.854955 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6tp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnwtm_calico-system(007a5707-c952-467d-a723-faa6baf2e9bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:10.856143 kubelet[2738]: E1108 00:30:10.856080 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" 
podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:30:12.520510 containerd[1547]: time="2025-11-08T00:30:12.520366473Z" level=info msg="StopPodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\"" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.551 [WARNING][5157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4kl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6", Pod:"csi-node-driver-w4kl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfa64c55d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.551 [INFO][5157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.551 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" iface="eth0" netns="" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.551 [INFO][5157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.551 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.566 [INFO][5164] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.566 [INFO][5164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.566 [INFO][5164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.571 [WARNING][5164] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.571 [INFO][5164] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.572 [INFO][5164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.575361 containerd[1547]: 2025-11-08 00:30:12.574 [INFO][5157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.575361 containerd[1547]: time="2025-11-08T00:30:12.575068834Z" level=info msg="TearDown network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" successfully" Nov 8 00:30:12.575361 containerd[1547]: time="2025-11-08T00:30:12.575086402Z" level=info msg="StopPodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" returns successfully" Nov 8 00:30:12.576106 containerd[1547]: time="2025-11-08T00:30:12.575793835Z" level=info msg="RemovePodSandbox for \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\"" Nov 8 00:30:12.576106 containerd[1547]: time="2025-11-08T00:30:12.575812074Z" level=info msg="Forcibly stopping sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\"" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.600 [WARNING][5179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4kl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a1ec52db-bd41-4d19-b1f6-a1fab4a28f01", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e4912184c1742301e0654eefa70caf2e8c021b091520dd67e74a1794e7326f6", Pod:"csi-node-driver-w4kl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicfa64c55d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.600 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.600 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" iface="eth0" netns="" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.600 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.600 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.613 [INFO][5187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.613 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.613 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.617 [WARNING][5187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.617 [INFO][5187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" HandleID="k8s-pod-network.264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Workload="localhost-k8s-csi--node--driver--w4kl5-eth0" Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.618 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.620782 containerd[1547]: 2025-11-08 00:30:12.619 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702" Nov 8 00:30:12.621710 containerd[1547]: time="2025-11-08T00:30:12.620814545Z" level=info msg="TearDown network for sandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" successfully" Nov 8 00:30:12.624973 containerd[1547]: time="2025-11-08T00:30:12.624944219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:12.625035 containerd[1547]: time="2025-11-08T00:30:12.624999923Z" level=info msg="RemovePodSandbox \"264f8590fe4f357643cea05a5d2fcf0e900dec9405587a87e7b08a6f22f38702\" returns successfully" Nov 8 00:30:12.625415 containerd[1547]: time="2025-11-08T00:30:12.625397481Z" level=info msg="StopPodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\"" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.647 [WARNING][5202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6c7b38c-00b0-4b95-83b4-14d8b8afda37", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81", Pod:"calico-apiserver-7f5dbf8768-w74ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68566c519b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.647 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.647 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" iface="eth0" netns="" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.647 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.647 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.660 [INFO][5209] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.661 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.661 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.666 [WARNING][5209] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.666 [INFO][5209] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.667 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.669377 containerd[1547]: 2025-11-08 00:30:12.668 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.669771 containerd[1547]: time="2025-11-08T00:30:12.669409294Z" level=info msg="TearDown network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" successfully" Nov 8 00:30:12.669771 containerd[1547]: time="2025-11-08T00:30:12.669429820Z" level=info msg="StopPodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" returns successfully" Nov 8 00:30:12.670179 containerd[1547]: time="2025-11-08T00:30:12.670161035Z" level=info msg="RemovePodSandbox for \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\"" Nov 8 00:30:12.670212 containerd[1547]: time="2025-11-08T00:30:12.670184250Z" level=info msg="Forcibly stopping sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\"" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.691 [WARNING][5223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"a6c7b38c-00b0-4b95-83b4-14d8b8afda37", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e704ccd883082694115be4bd02082794415afcc263e7b7d47ae00e0b3af0e81", Pod:"calico-apiserver-7f5dbf8768-w74ds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68566c519b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.691 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.691 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" iface="eth0" netns="" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.691 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.691 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.704 [INFO][5230] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.704 [INFO][5230] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.704 [INFO][5230] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.708 [WARNING][5230] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.708 [INFO][5230] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" HandleID="k8s-pod-network.5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--w74ds-eth0" Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.709 [INFO][5230] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.711151 containerd[1547]: 2025-11-08 00:30:12.710 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da" Nov 8 00:30:12.711630 containerd[1547]: time="2025-11-08T00:30:12.711174536Z" level=info msg="TearDown network for sandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" successfully" Nov 8 00:30:12.716269 containerd[1547]: time="2025-11-08T00:30:12.716239182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:12.716334 containerd[1547]: time="2025-11-08T00:30:12.716290117Z" level=info msg="RemovePodSandbox \"5172712585d1bf9ef5c0faa0219e0f32c4c93d030eab81dc4074bf3c7ef369da\" returns successfully" Nov 8 00:30:12.716777 containerd[1547]: time="2025-11-08T00:30:12.716622415Z" level=info msg="StopPodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\"" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.737 [WARNING][5244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tnwtm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"007a5707-c952-467d-a723-faa6baf2e9bc", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb", Pod:"goldmane-666569f655-tnwtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16c0dec5d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.737 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.737 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" iface="eth0" netns="" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.737 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.737 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.750 [INFO][5251] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.750 [INFO][5251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.750 [INFO][5251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.754 [WARNING][5251] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.754 [INFO][5251] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.755 [INFO][5251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.758093 containerd[1547]: 2025-11-08 00:30:12.757 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.758798 containerd[1547]: time="2025-11-08T00:30:12.758414979Z" level=info msg="TearDown network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" successfully" Nov 8 00:30:12.758798 containerd[1547]: time="2025-11-08T00:30:12.758432337Z" level=info msg="StopPodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" returns successfully" Nov 8 00:30:12.759034 containerd[1547]: time="2025-11-08T00:30:12.759019652Z" level=info msg="RemovePodSandbox for \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\"" Nov 8 00:30:12.759064 containerd[1547]: time="2025-11-08T00:30:12.759040366Z" level=info msg="Forcibly stopping sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\"" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.779 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--tnwtm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"007a5707-c952-467d-a723-faa6baf2e9bc", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876952453b096d7a9124c0d59a399daff9505c898d15fffb6cd756bff15020bb", Pod:"goldmane-666569f655-tnwtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16c0dec5d8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.779 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.779 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" iface="eth0" netns="" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.779 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.779 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.795 [INFO][5272] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.795 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.795 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.800 [WARNING][5272] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.800 [INFO][5272] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" HandleID="k8s-pod-network.7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Workload="localhost-k8s-goldmane--666569f655--tnwtm-eth0" Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.801 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.804173 containerd[1547]: 2025-11-08 00:30:12.802 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc" Nov 8 00:30:12.805631 containerd[1547]: time="2025-11-08T00:30:12.804516884Z" level=info msg="TearDown network for sandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" successfully" Nov 8 00:30:12.815867 containerd[1547]: time="2025-11-08T00:30:12.815696896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:12.815867 containerd[1547]: time="2025-11-08T00:30:12.815765028Z" level=info msg="RemovePodSandbox \"7e21492b8efb94462f2859236a0b114fb1df55989465021aff0a26e7124d4ffc\" returns successfully" Nov 8 00:30:12.816441 containerd[1547]: time="2025-11-08T00:30:12.816317801Z" level=info msg="StopPodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\"" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.836 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1885839c-21b3-4320-a460-ea9b5405da38", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a", Pod:"coredns-674b8bbfcf-v5dvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali194a5a45e66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.837 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.837 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" iface="eth0" netns="" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.837 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.837 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.850 [INFO][5294] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.850 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.850 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.854 [WARNING][5294] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.854 [INFO][5294] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.854 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.857119 containerd[1547]: 2025-11-08 00:30:12.856 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.857554 containerd[1547]: time="2025-11-08T00:30:12.857477824Z" level=info msg="TearDown network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" successfully" Nov 8 00:30:12.857554 containerd[1547]: time="2025-11-08T00:30:12.857495530Z" level=info msg="StopPodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" returns successfully" Nov 8 00:30:12.857978 containerd[1547]: time="2025-11-08T00:30:12.857817002Z" level=info msg="RemovePodSandbox for \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\"" Nov 8 00:30:12.857978 containerd[1547]: time="2025-11-08T00:30:12.857833648Z" level=info msg="Forcibly stopping sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\"" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.877 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1885839c-21b3-4320-a460-ea9b5405da38", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4316dd815531bd3ee8b71ea0140505803e43b6dce6e582ff2fc8c511d404141a", Pod:"coredns-674b8bbfcf-v5dvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali194a5a45e66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.878 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.878 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" iface="eth0" netns="" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.878 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.878 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.892 [INFO][5315] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.892 [INFO][5315] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.893 [INFO][5315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.896 [WARNING][5315] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.897 [INFO][5315] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" HandleID="k8s-pod-network.889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Workload="localhost-k8s-coredns--674b8bbfcf--v5dvc-eth0" Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.897 [INFO][5315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.900134 containerd[1547]: 2025-11-08 00:30:12.899 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91" Nov 8 00:30:12.901475 containerd[1547]: time="2025-11-08T00:30:12.900526919Z" level=info msg="TearDown network for sandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" successfully" Nov 8 00:30:12.901903 containerd[1547]: time="2025-11-08T00:30:12.901885588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:12.901980 containerd[1547]: time="2025-11-08T00:30:12.901969350Z" level=info msg="RemovePodSandbox \"889a01e4428cbf25f34fb7e7458a2e1f1c4cd45285473cf4a813857154399f91\" returns successfully" Nov 8 00:30:12.902409 containerd[1547]: time="2025-11-08T00:30:12.902389764Z" level=info msg="StopPodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\"" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.936 [WARNING][5329] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4xpww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d48d5302-73ac-4c35-86c4-ee48c074bbf4", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe", Pod:"coredns-674b8bbfcf-4xpww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideffba17c54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.936 [INFO][5329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.936 [INFO][5329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" iface="eth0" netns="" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.936 [INFO][5329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.936 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.963 [INFO][5336] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.963 [INFO][5336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.963 [INFO][5336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.967 [WARNING][5336] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.967 [INFO][5336] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.968 [INFO][5336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:12.971215 containerd[1547]: 2025-11-08 00:30:12.969 [INFO][5329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:12.971215 containerd[1547]: time="2025-11-08T00:30:12.970978732Z" level=info msg="TearDown network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" successfully" Nov 8 00:30:12.971215 containerd[1547]: time="2025-11-08T00:30:12.970997065Z" level=info msg="StopPodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" returns successfully" Nov 8 00:30:12.972140 containerd[1547]: time="2025-11-08T00:30:12.971959130Z" level=info msg="RemovePodSandbox for \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\"" Nov 8 00:30:12.972140 containerd[1547]: time="2025-11-08T00:30:12.971976062Z" level=info msg="Forcibly stopping sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\"" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:12.996 [WARNING][5350] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4xpww-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d48d5302-73ac-4c35-86c4-ee48c074bbf4", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76b7a05c69dd695db76ac52711e2d9782fee45f9d7999e58f5b5ecfaf0ea0ebe", Pod:"coredns-674b8bbfcf-4xpww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calideffba17c54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:12.996 [INFO][5350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:12.996 [INFO][5350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" iface="eth0" netns="" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:12.996 [INFO][5350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:12.996 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.013 [INFO][5357] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.013 [INFO][5357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.013 [INFO][5357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.017 [WARNING][5357] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.017 [INFO][5357] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" HandleID="k8s-pod-network.20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Workload="localhost-k8s-coredns--674b8bbfcf--4xpww-eth0" Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.018 [INFO][5357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.021068 containerd[1547]: 2025-11-08 00:30:13.019 [INFO][5350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053" Nov 8 00:30:13.021068 containerd[1547]: time="2025-11-08T00:30:13.020647683Z" level=info msg="TearDown network for sandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" successfully" Nov 8 00:30:13.022834 containerd[1547]: time="2025-11-08T00:30:13.022706977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:13.022834 containerd[1547]: time="2025-11-08T00:30:13.022766814Z" level=info msg="RemovePodSandbox \"20116e07a9e27f80e6ec41695f7f4f93fb7d93cf60575368a1d9ed5bc0f9c053\" returns successfully" Nov 8 00:30:13.023346 containerd[1547]: time="2025-11-08T00:30:13.023186698Z" level=info msg="StopPodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\"" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.049 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0", GenerateName:"calico-kube-controllers-57d6675b9f-", Namespace:"calico-system", SelfLink:"", UID:"6a9d7321-1148-43be-b5df-da7f193de30d", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d6675b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409", Pod:"calico-kube-controllers-57d6675b9f-clrr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib31b948a660", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.049 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.049 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" iface="eth0" netns="" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.049 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.049 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.065 [INFO][5378] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.065 [INFO][5378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.065 [INFO][5378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.070 [WARNING][5378] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.070 [INFO][5378] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.070 [INFO][5378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.073382 containerd[1547]: 2025-11-08 00:30:13.072 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.073382 containerd[1547]: time="2025-11-08T00:30:13.073263867Z" level=info msg="TearDown network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" successfully" Nov 8 00:30:13.073382 containerd[1547]: time="2025-11-08T00:30:13.073279341Z" level=info msg="StopPodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" returns successfully" Nov 8 00:30:13.083377 containerd[1547]: time="2025-11-08T00:30:13.074042941Z" level=info msg="RemovePodSandbox for \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\"" Nov 8 00:30:13.083377 containerd[1547]: time="2025-11-08T00:30:13.074062010Z" level=info msg="Forcibly stopping sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\"" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.097 [WARNING][5392] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0", GenerateName:"calico-kube-controllers-57d6675b9f-", Namespace:"calico-system", SelfLink:"", UID:"6a9d7321-1148-43be-b5df-da7f193de30d", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d6675b9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1deeb7f5d99f18f73a80fa045364a7e1ada6279663f0ab2ce860e2404511409", Pod:"calico-kube-controllers-57d6675b9f-clrr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib31b948a660", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.097 [INFO][5392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.097 [INFO][5392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" iface="eth0" netns="" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.097 [INFO][5392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.097 [INFO][5392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.111 [INFO][5399] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.112 [INFO][5399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.112 [INFO][5399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.115 [WARNING][5399] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.115 [INFO][5399] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" HandleID="k8s-pod-network.dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Workload="localhost-k8s-calico--kube--controllers--57d6675b9f--clrr6-eth0" Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.116 [INFO][5399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.118499 containerd[1547]: 2025-11-08 00:30:13.117 [INFO][5392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044" Nov 8 00:30:13.118499 containerd[1547]: time="2025-11-08T00:30:13.118559107Z" level=info msg="TearDown network for sandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" successfully" Nov 8 00:30:13.129902 containerd[1547]: time="2025-11-08T00:30:13.129712289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:13.129902 containerd[1547]: time="2025-11-08T00:30:13.129765456Z" level=info msg="RemovePodSandbox \"dbef0dff2b1755620ff851e1a9d0ce276f603f1685c53aa100b939e8f1fc9044\" returns successfully" Nov 8 00:30:13.130333 containerd[1547]: time="2025-11-08T00:30:13.130148677Z" level=info msg="StopPodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\"" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.158 [WARNING][5413] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc1a7be6-78b9-4b63-807c-f29c0ef99466", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9", Pod:"calico-apiserver-7f5dbf8768-lwfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a25e0949aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.158 [INFO][5413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.158 [INFO][5413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" iface="eth0" netns="" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.158 [INFO][5413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.158 [INFO][5413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.174 [INFO][5420] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.174 [INFO][5420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.174 [INFO][5420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.179 [WARNING][5420] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.179 [INFO][5420] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.180 [INFO][5420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.182966 containerd[1547]: 2025-11-08 00:30:13.181 [INFO][5413] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.188885 containerd[1547]: time="2025-11-08T00:30:13.183335122Z" level=info msg="TearDown network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" successfully" Nov 8 00:30:13.188885 containerd[1547]: time="2025-11-08T00:30:13.183366412Z" level=info msg="StopPodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" returns successfully" Nov 8 00:30:13.188885 containerd[1547]: time="2025-11-08T00:30:13.183804764Z" level=info msg="RemovePodSandbox for \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\"" Nov 8 00:30:13.188885 containerd[1547]: time="2025-11-08T00:30:13.183822306Z" level=info msg="Forcibly stopping sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\"" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.208 [WARNING][5434] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0", GenerateName:"calico-apiserver-7f5dbf8768-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc1a7be6-78b9-4b63-807c-f29c0ef99466", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5dbf8768", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"902d48432040b13b3abb8b185f001628a826599dc216ebfa3fd62e8c31f438c9", Pod:"calico-apiserver-7f5dbf8768-lwfmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a25e0949aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.208 [INFO][5434] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.208 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" iface="eth0" netns="" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.208 [INFO][5434] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.208 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.225 [INFO][5441] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.225 [INFO][5441] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.225 [INFO][5441] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.230 [WARNING][5441] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.230 [INFO][5441] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" HandleID="k8s-pod-network.16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Workload="localhost-k8s-calico--apiserver--7f5dbf8768--lwfmb-eth0" Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.231 [INFO][5441] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.234481 containerd[1547]: 2025-11-08 00:30:13.232 [INFO][5434] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1" Nov 8 00:30:13.234481 containerd[1547]: time="2025-11-08T00:30:13.233524318Z" level=info msg="TearDown network for sandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" successfully" Nov 8 00:30:13.243773 containerd[1547]: time="2025-11-08T00:30:13.243745299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:13.243903 containerd[1547]: time="2025-11-08T00:30:13.243883022Z" level=info msg="RemovePodSandbox \"16ee7b0656a955c5eeb3aac32acbc5e262d9ed7a5d11f72e1bd5f47b98c075e1\" returns successfully" Nov 8 00:30:13.246626 containerd[1547]: time="2025-11-08T00:30:13.246567945Z" level=info msg="StopPodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\"" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.266 [WARNING][5455] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" WorkloadEndpoint="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.267 [INFO][5455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.267 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" iface="eth0" netns="" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.267 [INFO][5455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.267 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.281 [INFO][5462] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.281 [INFO][5462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.281 [INFO][5462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.285 [WARNING][5462] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.285 [INFO][5462] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.286 [INFO][5462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.289365 containerd[1547]: 2025-11-08 00:30:13.288 [INFO][5455] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.289850 containerd[1547]: time="2025-11-08T00:30:13.289754827Z" level=info msg="TearDown network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" successfully" Nov 8 00:30:13.289850 containerd[1547]: time="2025-11-08T00:30:13.289778855Z" level=info msg="StopPodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" returns successfully" Nov 8 00:30:13.290273 containerd[1547]: time="2025-11-08T00:30:13.290257889Z" level=info msg="RemovePodSandbox for \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\"" Nov 8 00:30:13.290484 containerd[1547]: time="2025-11-08T00:30:13.290353770Z" level=info msg="Forcibly stopping sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\"" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.315 [WARNING][5476] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" WorkloadEndpoint="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.316 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.316 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" iface="eth0" netns="" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.316 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.316 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.330 [INFO][5483] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.331 [INFO][5483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.331 [INFO][5483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.335 [WARNING][5483] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.335 [INFO][5483] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" HandleID="k8s-pod-network.80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Workload="localhost-k8s-whisker--6f4bb8f964--gbjvr-eth0" Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.336 [INFO][5483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:13.340552 containerd[1547]: 2025-11-08 00:30:13.338 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590" Nov 8 00:30:13.340552 containerd[1547]: time="2025-11-08T00:30:13.339513357Z" level=info msg="TearDown network for sandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" successfully" Nov 8 00:30:13.343206 containerd[1547]: time="2025-11-08T00:30:13.342887846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:30:13.343206 containerd[1547]: time="2025-11-08T00:30:13.342966865Z" level=info msg="RemovePodSandbox \"80507d9e861ad2c8a455bbe41b1344d725eb73c0d460351190e10f277e760590\" returns successfully" Nov 8 00:30:17.490703 kubelet[2738]: E1108 00:30:17.490660 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:30:21.488893 kubelet[2738]: E1108 00:30:21.488860 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:30:22.507303 kubelet[2738]: E1108 00:30:22.507165 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:30:22.507303 kubelet[2738]: E1108 00:30:22.507191 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:30:23.710741 systemd[1]: Started sshd@7-139.178.70.106:22-147.75.109.163:33816.service - OpenSSH per-connection server daemon (147.75.109.163:33816). Nov 8 00:30:23.790743 sshd[5522]: Accepted publickey for core from 147.75.109.163 port 33816 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:23.792417 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:23.797640 systemd-logind[1519]: New session 10 of user core. Nov 8 00:30:23.802700 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:30:24.245535 sshd[5522]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:24.248538 systemd[1]: sshd@7-139.178.70.106:22-147.75.109.163:33816.service: Deactivated successfully. Nov 8 00:30:24.249950 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:30:24.251550 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:30:24.252193 systemd-logind[1519]: Removed session 10. 
Nov 8 00:30:24.490552 kubelet[2738]: E1108 00:30:24.490509 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:30:25.489023 kubelet[2738]: E1108 00:30:25.488745 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:30:29.254734 systemd[1]: Started sshd@8-139.178.70.106:22-147.75.109.163:33828.service - OpenSSH per-connection server daemon (147.75.109.163:33828). Nov 8 00:30:29.581909 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 33828 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:29.582567 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:29.587743 systemd-logind[1519]: New session 11 of user core. Nov 8 00:30:29.592749 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:30:29.775858 sshd[5544]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:29.778236 systemd[1]: sshd@8-139.178.70.106:22-147.75.109.163:33828.service: Deactivated successfully. Nov 8 00:30:29.779532 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:30:29.780190 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:30:29.780811 systemd-logind[1519]: Removed session 11. 
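The kubelet "Back-off pulling image" errors above recur every few minutes because image pulls sit behind a per-image exponential backoff; kubelet's defaults are a 10s initial delay doubling up to a 300s cap (those values are kubelet defaults, assumed here rather than read from this log). A rough sketch of that schedule:

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of kubelet's image-pull backoff shape: exponential with a cap.
// The 10s initial / 300s cap match kubelet's defaults for image pulls;
// they are assumptions here, not values taken from the log above.
func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(10*time.Second, 300*time.Second, 7) {
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
	// 10s 20s 40s 1m20s 2m40s 5m0s 5m0s: consistent with the same
	// ImagePullBackOff errors resurfacing every few minutes in this log.
}
```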
Nov 8 00:30:31.489977 containerd[1547]: time="2025-11-08T00:30:31.489720508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:30:31.880130 containerd[1547]: time="2025-11-08T00:30:31.880036177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:31.880980 containerd[1547]: time="2025-11-08T00:30:31.880896689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:30:31.880980 containerd[1547]: time="2025-11-08T00:30:31.880953170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:30:31.881138 kubelet[2738]: E1108 00:30:31.881049 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:30:31.881138 kubelet[2738]: E1108 00:30:31.881083 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:30:31.881740 kubelet[2738]: E1108 00:30:31.881165 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ac9f2fab8b1a41b6acb7bc84bb1a359e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vlmq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84848b66c4-gnwcd_calico-system(926ce8dd-4771-4d76-a928-b17ff008cf2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:31.883520 containerd[1547]: time="2025-11-08T00:30:31.883126629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:30:32.257212 containerd[1547]: time="2025-11-08T00:30:32.257159883Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:32.263484 containerd[1547]: time="2025-11-08T00:30:32.263454697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:30:32.263554 containerd[1547]: time="2025-11-08T00:30:32.263512745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:30:32.263673 kubelet[2738]: E1108 00:30:32.263638 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:30:32.263726 kubelet[2738]: E1108 00:30:32.263680 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:30:32.263811 kubelet[2738]: E1108 00:30:32.263772 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlmq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84848b66c4-gnwcd_calico-system(926ce8dd-4771-4d76-a928-b17ff008cf2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:32.264981 kubelet[2738]: E1108 00:30:32.264943 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:30:34.490799 containerd[1547]: time="2025-11-08T00:30:34.490329838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:30:34.785265 systemd[1]: Started sshd@9-139.178.70.106:22-147.75.109.163:41048.service - OpenSSH per-connection server daemon (147.75.109.163:41048). 
Nov 8 00:30:34.827177 sshd[5563]: Accepted publickey for core from 147.75.109.163 port 41048 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:34.828106 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:34.831365 systemd-logind[1519]: New session 12 of user core. Nov 8 00:30:34.832525 containerd[1547]: time="2025-11-08T00:30:34.832408216Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:34.832764 containerd[1547]: time="2025-11-08T00:30:34.832745012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:30:34.832890 containerd[1547]: time="2025-11-08T00:30:34.832792915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:34.832938 kubelet[2738]: E1108 00:30:34.832914 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:34.836153 kubelet[2738]: E1108 00:30:34.832945 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:34.836153 kubelet[2738]: E1108 00:30:34.833032 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9wmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5dbf8768-w74ds_calico-apiserver(a6c7b38c-00b0-4b95-83b4-14d8b8afda37): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:34.836153 kubelet[2738]: E1108 00:30:34.834751 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:30:34.835777 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:30:34.940198 sshd[5563]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:34.949770 systemd[1]: sshd@9-139.178.70.106:22-147.75.109.163:41048.service: Deactivated successfully. Nov 8 00:30:34.950771 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:30:34.951872 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:30:34.953153 systemd[1]: Started sshd@10-139.178.70.106:22-147.75.109.163:41064.service - OpenSSH per-connection server daemon (147.75.109.163:41064). Nov 8 00:30:34.953793 systemd-logind[1519]: Removed session 12. Nov 8 00:30:34.991906 sshd[5577]: Accepted publickey for core from 147.75.109.163 port 41064 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:34.992778 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:34.995393 systemd-logind[1519]: New session 13 of user core. Nov 8 00:30:34.998726 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:30:35.123700 sshd[5577]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:35.132466 systemd[1]: sshd@10-139.178.70.106:22-147.75.109.163:41064.service: Deactivated successfully. Nov 8 00:30:35.134003 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:30:35.135796 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:30:35.142619 systemd[1]: Started sshd@11-139.178.70.106:22-147.75.109.163:41074.service - OpenSSH per-connection server daemon (147.75.109.163:41074). Nov 8 00:30:35.148801 systemd-logind[1519]: Removed session 13. 
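The same failure can be reproduced outside kubelet by pulling through containerd directly in the "k8s.io" namespace. A sketch using the containerd 1.x Go client follows; the socket path and namespace are the conventional Kubernetes defaults, assumed rather than taken from this log (containerd 2.x relocated the client package to github.com/containerd/containerd/v2/client):

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
)

// Reproduce the kubelet-visible failure by pulling directly through
// containerd. The socket path and the "k8s.io" namespace are the usual
// Kubernetes defaults; both are assumptions, not values read from this log.
func main() {
	client, err := containerd.New("/run/containerd/containerd.sock",
		containerd.WithDefaultNamespace("k8s.io"))
	if err != nil {
		fmt.Println("connect:", err)
		return
	}
	defer client.Close()

	_, err = client.Pull(context.Background(),
		"ghcr.io/flatcar/calico/apiserver:v3.30.4")
	if err != nil {
		// Expect the same NotFound the log shows while this tag is absent.
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled successfully")
}
```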
Nov 8 00:30:35.178371 sshd[5587]: Accepted publickey for core from 147.75.109.163 port 41074 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:35.179535 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:35.182435 systemd-logind[1519]: New session 14 of user core. Nov 8 00:30:35.188718 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:30:35.290144 sshd[5587]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:35.292384 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:30:35.292477 systemd[1]: sshd@11-139.178.70.106:22-147.75.109.163:41074.service: Deactivated successfully. Nov 8 00:30:35.294001 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:30:35.294553 systemd-logind[1519]: Removed session 14. Nov 8 00:30:35.489474 containerd[1547]: time="2025-11-08T00:30:35.489449319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:30:35.839551 containerd[1547]: time="2025-11-08T00:30:35.839370840Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:35.839881 containerd[1547]: time="2025-11-08T00:30:35.839725488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:30:35.839881 containerd[1547]: time="2025-11-08T00:30:35.839786867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:30:35.840732 kubelet[2738]: E1108 00:30:35.840700 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:30:35.840889 kubelet[2738]: E1108 00:30:35.840742 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:30:35.840946 containerd[1547]: time="2025-11-08T00:30:35.840930172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:30:35.841267 kubelet[2738]: E1108 00:30:35.841239 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:36.176253 containerd[1547]: time="2025-11-08T00:30:36.176101075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:36.177246 containerd[1547]: time="2025-11-08T00:30:36.176597992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:30:36.177246 containerd[1547]: time="2025-11-08T00:30:36.176629701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:36.177246 containerd[1547]: time="2025-11-08T00:30:36.177046483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:30:36.177524 kubelet[2738]: E1108 00:30:36.176783 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:30:36.177524 kubelet[2738]: E1108 00:30:36.176830 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:30:36.177524 kubelet[2738]: E1108 00:30:36.177053 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6tp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnwtm_calico-system(007a5707-c952-467d-a723-faa6baf2e9bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:36.178589 kubelet[2738]: E1108 00:30:36.178561 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:30:36.696535 containerd[1547]: time="2025-11-08T00:30:36.696495972Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:36.702894 containerd[1547]: time="2025-11-08T00:30:36.702852561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:30:36.702979 containerd[1547]: time="2025-11-08T00:30:36.702934366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:36.703088 kubelet[2738]: E1108 00:30:36.703056 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:36.703151 kubelet[2738]: E1108 00:30:36.703099 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:36.703316 kubelet[2738]: E1108 00:30:36.703276 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6658x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7f5dbf8768-lwfmb_calico-apiserver(dc1a7be6-78b9-4b63-807c-f29c0ef99466): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:36.704019 containerd[1547]: time="2025-11-08T00:30:36.703891256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:30:36.704954 kubelet[2738]: E1108 00:30:36.704894 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:30:37.085677 containerd[1547]: time="2025-11-08T00:30:37.085566818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:37.086891 containerd[1547]: time="2025-11-08T00:30:37.085897741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:30:37.086891 containerd[1547]: time="2025-11-08T00:30:37.085944393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:30:37.086946 kubelet[2738]: E1108 00:30:37.086032 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:30:37.086946 kubelet[2738]: E1108 00:30:37.086064 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:30:37.086946 kubelet[2738]: E1108 00:30:37.086135 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjqld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w4kl5_calico-system(a1ec52db-bd41-4d19-b1f6-a1fab4a28f01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:37.087253 kubelet[2738]: E1108 00:30:37.087227 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:30:38.489017 containerd[1547]: time="2025-11-08T00:30:38.488709586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 
00:30:38.838106 containerd[1547]: time="2025-11-08T00:30:38.837993106Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:38.838624 containerd[1547]: time="2025-11-08T00:30:38.838579326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:30:38.839147 containerd[1547]: time="2025-11-08T00:30:38.838657891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:30:38.839190 kubelet[2738]: E1108 00:30:38.838771 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:30:38.839190 kubelet[2738]: E1108 00:30:38.838804 2738 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:30:38.839190 kubelet[2738]: E1108 00:30:38.838909 2738 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-57d6675b9f-clrr6_calico-system(6a9d7321-1148-43be-b5df-da7f193de30d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:38.840956 kubelet[2738]: E1108 00:30:38.840922 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:30:40.307913 systemd[1]: Started sshd@12-139.178.70.106:22-147.75.109.163:51504.service - OpenSSH per-connection server daemon (147.75.109.163:51504). Nov 8 00:30:40.355378 sshd[5606]: Accepted publickey for core from 147.75.109.163 port 51504 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.356079 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.360744 systemd-logind[1519]: New session 15 of user core. Nov 8 00:30:40.365846 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:30:40.518387 sshd[5606]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:40.520309 systemd[1]: sshd@12-139.178.70.106:22-147.75.109.163:51504.service: Deactivated successfully. Nov 8 00:30:40.523178 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:30:40.525157 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:30:40.525875 systemd-logind[1519]: Removed session 15. Nov 8 00:30:45.534820 systemd[1]: Started sshd@13-139.178.70.106:22-147.75.109.163:51514.service - OpenSSH per-connection server daemon (147.75.109.163:51514). Nov 8 00:30:46.047348 sshd[5621]: Accepted publickey for core from 147.75.109.163 port 51514 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:46.049085 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:46.053725 systemd-logind[1519]: New session 16 of user core. Nov 8 00:30:46.062797 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:30:46.237401 sshd[5621]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:46.241916 systemd[1]: sshd@13-139.178.70.106:22-147.75.109.163:51514.service: Deactivated successfully. Nov 8 00:30:46.243245 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:30:46.243796 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:30:46.244696 systemd-logind[1519]: Removed session 16. Nov 8 00:30:46.490275 kubelet[2738]: E1108 00:30:46.489949 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:30:47.489748 kubelet[2738]: E1108 00:30:47.489687 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:30:47.993972 systemd[1]: run-containerd-runc-k8s.io-96598afc89b896e3fa8797cd3c55e939bad8bb6dbd7cd7edcdefb7ccc109e36f-runc.pzYDc3.mount: Deactivated successfully. 
Nov 8 00:30:49.489488 kubelet[2738]: E1108 00:30:49.489193 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:30:49.489831 kubelet[2738]: E1108 00:30:49.489528 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:30:51.245573 systemd[1]: Started sshd@14-139.178.70.106:22-147.75.109.163:41678.service - OpenSSH per-connection server daemon (147.75.109.163:41678). Nov 8 00:30:51.440254 sshd[5656]: Accepted publickey for core from 147.75.109.163 port 41678 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:51.441439 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:51.445277 systemd-logind[1519]: New session 17 of user core. Nov 8 00:30:51.450719 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 8 00:30:51.490137 kubelet[2738]: E1108 00:30:51.488963 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:30:51.497175 kubelet[2738]: E1108 00:30:51.490305 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:30:51.703368 sshd[5656]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:51.708140 systemd[1]: sshd@14-139.178.70.106:22-147.75.109.163:41678.service: Deactivated successfully. Nov 8 00:30:51.710230 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:30:51.712209 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:30:51.713319 systemd-logind[1519]: Removed session 17. Nov 8 00:30:56.712772 systemd[1]: Started sshd@15-139.178.70.106:22-147.75.109.163:41684.service - OpenSSH per-connection server daemon (147.75.109.163:41684). Nov 8 00:30:56.752018 sshd[5669]: Accepted publickey for core from 147.75.109.163 port 41684 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:56.753262 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:56.757714 systemd-logind[1519]: New session 18 of user core. Nov 8 00:30:56.762711 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:30:56.852306 sshd[5669]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:56.858070 systemd[1]: sshd@15-139.178.70.106:22-147.75.109.163:41684.service: Deactivated successfully. Nov 8 00:30:56.859118 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:30:56.859918 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:30:56.861067 systemd[1]: Started sshd@16-139.178.70.106:22-147.75.109.163:41690.service - OpenSSH per-connection server daemon (147.75.109.163:41690). Nov 8 00:30:56.862437 systemd-logind[1519]: Removed session 18. Nov 8 00:30:56.891400 sshd[5682]: Accepted publickey for core from 147.75.109.163 port 41690 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:56.892289 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:56.894693 systemd-logind[1519]: New session 19 of user core. Nov 8 00:30:56.900843 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 8 00:30:57.278740 sshd[5682]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:57.285701 systemd[1]: sshd@16-139.178.70.106:22-147.75.109.163:41690.service: Deactivated successfully. Nov 8 00:30:57.287114 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:30:57.288008 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:30:57.290841 systemd[1]: Started sshd@17-139.178.70.106:22-147.75.109.163:41692.service - OpenSSH per-connection server daemon (147.75.109.163:41692). Nov 8 00:30:57.291749 systemd-logind[1519]: Removed session 19. Nov 8 00:30:57.342924 sshd[5693]: Accepted publickey for core from 147.75.109.163 port 41692 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:57.343466 sshd[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:57.348068 systemd-logind[1519]: New session 20 of user core. Nov 8 00:30:57.350709 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:30:57.495034 kubelet[2738]: E1108 00:30:57.494977 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:30:58.552992 sshd[5693]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:58.565335 systemd[1]: Started sshd@18-139.178.70.106:22-147.75.109.163:41700.service - OpenSSH per-connection server daemon (147.75.109.163:41700). Nov 8 00:30:58.567399 systemd[1]: sshd@17-139.178.70.106:22-147.75.109.163:41692.service: Deactivated successfully. Nov 8 00:30:58.568572 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:30:58.576837 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:30:58.584343 systemd-logind[1519]: Removed session 20. Nov 8 00:30:58.680892 sshd[5706]: Accepted publickey for core from 147.75.109.163 port 41700 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:58.682579 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:58.687107 systemd-logind[1519]: New session 21 of user core. Nov 8 00:30:58.692769 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:30:59.565716 sshd[5706]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:59.570835 systemd[1]: Started sshd@19-139.178.70.106:22-147.75.109.163:41702.service - OpenSSH per-connection server daemon (147.75.109.163:41702). Nov 8 00:30:59.581331 systemd[1]: sshd@18-139.178.70.106:22-147.75.109.163:41700.service: Deactivated successfully. Nov 8 00:30:59.582740 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:30:59.583060 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:30:59.587755 systemd-logind[1519]: Removed session 21. 
Nov 8 00:30:59.965556 sshd[5721]: Accepted publickey for core from 147.75.109.163 port 41702 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:59.966797 sshd[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:59.972527 systemd-logind[1519]: New session 22 of user core. Nov 8 00:30:59.976762 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:31:00.099982 sshd[5721]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:00.102909 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:31:00.103307 systemd[1]: sshd@19-139.178.70.106:22-147.75.109.163:41702.service: Deactivated successfully. Nov 8 00:31:00.105937 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:31:00.109102 systemd-logind[1519]: Removed session 22. Nov 8 00:31:01.857674 kubelet[2738]: E1108 00:31:01.856943 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnwtm" podUID="007a5707-c952-467d-a723-faa6baf2e9bc" Nov 8 00:31:01.871336 kubelet[2738]: E1108 00:31:01.871053 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01" Nov 8 00:31:02.490684 kubelet[2738]: E1108 00:31:02.490649 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84848b66c4-gnwcd" 
podUID="926ce8dd-4771-4d76-a928-b17ff008cf2e" Nov 8 00:31:03.489050 kubelet[2738]: E1108 00:31:03.489015 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-lwfmb" podUID="dc1a7be6-78b9-4b63-807c-f29c0ef99466" Nov 8 00:31:05.111759 systemd[1]: Started sshd@20-139.178.70.106:22-147.75.109.163:49354.service - OpenSSH per-connection server daemon (147.75.109.163:49354). Nov 8 00:31:05.144892 sshd[5738]: Accepted publickey for core from 147.75.109.163 port 49354 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:31:05.146180 sshd[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:05.150589 systemd-logind[1519]: New session 23 of user core. Nov 8 00:31:05.157954 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:31:05.307691 sshd[5738]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:05.315385 systemd[1]: sshd@20-139.178.70.106:22-147.75.109.163:49354.service: Deactivated successfully. Nov 8 00:31:05.316336 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:31:05.316966 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:31:05.317555 systemd-logind[1519]: Removed session 23. Nov 8 00:31:06.492504 kubelet[2738]: E1108 00:31:06.492406 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57d6675b9f-clrr6" podUID="6a9d7321-1148-43be-b5df-da7f193de30d" Nov 8 00:31:10.318183 systemd[1]: Started sshd@21-139.178.70.106:22-147.75.109.163:39872.service - OpenSSH per-connection server daemon (147.75.109.163:39872). Nov 8 00:31:10.365334 sshd[5756]: Accepted publickey for core from 147.75.109.163 port 39872 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:31:10.365700 sshd[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:10.368438 systemd-logind[1519]: New session 24 of user core. Nov 8 00:31:10.373694 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:31:10.466077 sshd[5756]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:10.467932 systemd[1]: sshd@21-139.178.70.106:22-147.75.109.163:39872.service: Deactivated successfully. Nov 8 00:31:10.469310 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:31:10.471079 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:31:10.471590 systemd-logind[1519]: Removed session 24. 
Nov 8 00:31:12.533140 kubelet[2738]: E1108 00:31:12.533076 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7f5dbf8768-w74ds" podUID="a6c7b38c-00b0-4b95-83b4-14d8b8afda37" Nov 8 00:31:12.533440 kubelet[2738]: E1108 00:31:12.533128 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w4kl5" podUID="a1ec52db-bd41-4d19-b1f6-a1fab4a28f01"