Nov 8 00:14:24.737835 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:14:24.737851 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:14:24.737858 kernel: Disabled fast string operations Nov 8 00:14:24.737862 kernel: BIOS-provided physical RAM map: Nov 8 00:14:24.737865 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Nov 8 00:14:24.737869 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Nov 8 00:14:24.737875 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Nov 8 00:14:24.737879 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Nov 8 00:14:24.737884 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Nov 8 00:14:24.737888 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Nov 8 00:14:24.737892 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Nov 8 00:14:24.737896 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Nov 8 00:14:24.737900 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Nov 8 00:14:24.737904 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 8 00:14:24.737910 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Nov 8 00:14:24.737915 kernel: NX (Execute Disable) protection: active Nov 8 00:14:24.737919 kernel: APIC: Static calls initialized Nov 8 00:14:24.737924 kernel: SMBIOS 2.7 present. Nov 8 00:14:24.737929 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Nov 8 00:14:24.737934 kernel: vmware: hypercall mode: 0x00 Nov 8 00:14:24.737938 kernel: Hypervisor detected: VMware Nov 8 00:14:24.737943 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Nov 8 00:14:24.737948 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Nov 8 00:14:24.737953 kernel: vmware: using clock offset of 2938560806 ns Nov 8 00:14:24.737957 kernel: tsc: Detected 3408.000 MHz processor Nov 8 00:14:24.737963 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:14:24.737968 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:14:24.737972 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Nov 8 00:14:24.737977 kernel: total RAM covered: 3072M Nov 8 00:14:24.737982 kernel: Found optimal setting for mtrr clean up Nov 8 00:14:24.737988 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Nov 8 00:14:24.737994 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Nov 8 00:14:24.737999 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:14:24.738004 kernel: Using GB pages for direct mapping Nov 8 00:14:24.738008 kernel: ACPI: Early table checksum verification disabled Nov 8 00:14:24.738013 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Nov 8 00:14:24.738018 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Nov 8 00:14:24.738022 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Nov 8 00:14:24.738027 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Nov 8 00:14:24.738039 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Nov 8 00:14:24.738047 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Nov 8 00:14:24.738052 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Nov 8 00:14:24.738057 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Nov 8 00:14:24.739022 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Nov 8 00:14:24.739063 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Nov 8 00:14:24.739071 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Nov 8 00:14:24.739076 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Nov 8 00:14:24.739081 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Nov 8 00:14:24.739086 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Nov 8 00:14:24.739091 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 8 00:14:24.739096 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 8 00:14:24.739101 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Nov 8 00:14:24.739106 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Nov 8 00:14:24.739111 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Nov 8 00:14:24.739116 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Nov 8 00:14:24.739122 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Nov 8 00:14:24.739127 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Nov 8 00:14:24.739132 kernel: system APIC only can use physical flat Nov 8 00:14:24.739138 kernel: APIC: Switched APIC routing to: physical flat Nov 8 00:14:24.739143 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:14:24.739148 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Nov 8 00:14:24.739153 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Nov 8 00:14:24.739158 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Nov 8 00:14:24.739163 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Nov 8 00:14:24.739168 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Nov 8 00:14:24.739173 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Nov 8 00:14:24.739178 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Nov 8 00:14:24.739183 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Nov 8 00:14:24.739188 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Nov 8 00:14:24.739193 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Nov 8 00:14:24.739198 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Nov 8 00:14:24.739203 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Nov 8 00:14:24.739207 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Nov 8 00:14:24.739212 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Nov 8 00:14:24.739218 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Nov 8 00:14:24.739223 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Nov 8 00:14:24.739228 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Nov 8 00:14:24.739233 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Nov 8 00:14:24.739237 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Nov 8 00:14:24.739242 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Nov 8 00:14:24.739247 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Nov 8 00:14:24.739252 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Nov 8 00:14:24.739257 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Nov 8 00:14:24.739262 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Nov 8 00:14:24.739266 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Nov 8 00:14:24.739272 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Nov 8 00:14:24.739277 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Nov 8 00:14:24.739282 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Nov 8 00:14:24.739287 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Nov 8 00:14:24.739292 kernel: SRAT: PXM 
0 -> APIC 0x3c -> Node 0 Nov 8 00:14:24.739297 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Nov 8 00:14:24.739301 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Nov 8 00:14:24.739306 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Nov 8 00:14:24.739311 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Nov 8 00:14:24.739316 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Nov 8 00:14:24.739322 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Nov 8 00:14:24.739327 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Nov 8 00:14:24.739331 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Nov 8 00:14:24.739336 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Nov 8 00:14:24.739341 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Nov 8 00:14:24.739346 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Nov 8 00:14:24.739351 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Nov 8 00:14:24.739356 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Nov 8 00:14:24.739360 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Nov 8 00:14:24.739368 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Nov 8 00:14:24.739375 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Nov 8 00:14:24.739398 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Nov 8 00:14:24.739403 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Nov 8 00:14:24.739411 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Nov 8 00:14:24.739417 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Nov 8 00:14:24.739431 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Nov 8 00:14:24.739446 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Nov 8 00:14:24.739458 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Nov 8 00:14:24.739474 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Nov 8 00:14:24.739486 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Nov 8 00:14:24.739518 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Nov 8 00:14:24.739524 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Nov 8 00:14:24.739530 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Nov 8 00:14:24.739539 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Nov 8 00:14:24.739546 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Nov 8 00:14:24.739551 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Nov 8 00:14:24.739556 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Nov 8 00:14:24.739561 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Nov 8 00:14:24.739567 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Nov 8 00:14:24.739573 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Nov 8 00:14:24.739578 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Nov 8 00:14:24.739583 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Nov 8 00:14:24.739588 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Nov 8 00:14:24.739594 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Nov 8 00:14:24.739599 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Nov 8 00:14:24.739604 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Nov 8 00:14:24.739609 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Nov 8 00:14:24.739614 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Nov 8 00:14:24.739619 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Nov 8 00:14:24.739625 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Nov 8 00:14:24.739631 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Nov 8 00:14:24.739636 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Nov 8 00:14:24.739641 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Nov 8 00:14:24.739646 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Nov 8 00:14:24.739652 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Nov 8 00:14:24.739657 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Nov 8 00:14:24.739662 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Nov 8 00:14:24.739667 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Nov 8 00:14:24.739673 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Nov 8 
00:14:24.739679 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Nov 8 00:14:24.739684 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Nov 8 00:14:24.739689 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Nov 8 00:14:24.739695 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Nov 8 00:14:24.739700 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Nov 8 00:14:24.739705 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Nov 8 00:14:24.739710 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Nov 8 00:14:24.739715 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Nov 8 00:14:24.739720 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Nov 8 00:14:24.739726 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Nov 8 00:14:24.739732 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Nov 8 00:14:24.739737 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Nov 8 00:14:24.739742 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Nov 8 00:14:24.739747 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Nov 8 00:14:24.739752 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Nov 8 00:14:24.739757 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Nov 8 00:14:24.739763 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Nov 8 00:14:24.739768 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Nov 8 00:14:24.739773 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Nov 8 00:14:24.739778 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Nov 8 00:14:24.739783 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Nov 8 00:14:24.739790 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Nov 8 00:14:24.739795 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Nov 8 00:14:24.739800 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Nov 8 00:14:24.739805 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Nov 8 00:14:24.739810 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Nov 8 00:14:24.739815 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Nov 8 00:14:24.739820 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Nov 8 00:14:24.739826 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Nov 8 00:14:24.739831 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Nov 8 00:14:24.739836 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Nov 8 00:14:24.739842 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Nov 8 00:14:24.739848 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Nov 8 00:14:24.739853 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Nov 8 00:14:24.739858 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Nov 8 00:14:24.739863 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Nov 8 00:14:24.739868 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Nov 8 00:14:24.739888 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Nov 8 00:14:24.739900 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Nov 8 00:14:24.739907 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Nov 8 00:14:24.739912 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Nov 8 00:14:24.739927 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Nov 8 00:14:24.739934 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Nov 8 00:14:24.739939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 8 00:14:24.739944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 8 00:14:24.739950 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Nov 8 00:14:24.739956 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Nov 8 00:14:24.739961 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Nov 8 00:14:24.739967 kernel: Zone ranges: Nov 8 00:14:24.739972 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:14:24.739979 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Nov 8 00:14:24.739985 kernel: Normal empty Nov 8 00:14:24.739990 kernel: Movable zone start 
for each node Nov 8 00:14:24.739995 kernel: Early memory node ranges Nov 8 00:14:24.740001 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Nov 8 00:14:24.740006 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Nov 8 00:14:24.740011 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Nov 8 00:14:24.740016 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Nov 8 00:14:24.740022 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:14:24.740027 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Nov 8 00:14:24.740051 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Nov 8 00:14:24.740057 kernel: ACPI: PM-Timer IO Port: 0x1008 Nov 8 00:14:24.740062 kernel: system APIC only can use physical flat Nov 8 00:14:24.740067 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Nov 8 00:14:24.740073 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 8 00:14:24.740078 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 8 00:14:24.740083 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 8 00:14:24.740089 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 8 00:14:24.740094 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 8 00:14:24.740099 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 8 00:14:24.740106 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 8 00:14:24.740112 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 8 00:14:24.740117 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 8 00:14:24.740122 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 8 00:14:24.740127 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 8 00:14:24.740133 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 8 00:14:24.740138 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 8 00:14:24.740143 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 8 00:14:24.740148 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 8 00:14:24.740155 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 8 00:14:24.740160 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Nov 8 00:14:24.740165 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Nov 8 00:14:24.740171 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Nov 8 00:14:24.740176 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Nov 8 00:14:24.740181 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Nov 8 00:14:24.740187 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Nov 8 00:14:24.740192 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Nov 8 00:14:24.740197 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Nov 8 00:14:24.740202 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Nov 8 00:14:24.740209 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Nov 8 00:14:24.740214 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Nov 8 00:14:24.740219 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Nov 8 00:14:24.740225 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Nov 8 00:14:24.740230 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Nov 8 00:14:24.740235 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Nov 8 00:14:24.740240 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Nov 8 00:14:24.740246 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] 
high edge lint[0x1]) Nov 8 00:14:24.740251 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Nov 8 00:14:24.740256 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Nov 8 00:14:24.740263 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Nov 8 00:14:24.740268 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Nov 8 00:14:24.740273 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Nov 8 00:14:24.740278 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Nov 8 00:14:24.740284 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Nov 8 00:14:24.740289 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Nov 8 00:14:24.740294 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Nov 8 00:14:24.740299 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Nov 8 00:14:24.740305 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Nov 8 00:14:24.740311 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Nov 8 00:14:24.740316 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Nov 8 00:14:24.740321 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Nov 8 00:14:24.740327 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Nov 8 00:14:24.740332 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Nov 8 00:14:24.740337 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Nov 8 00:14:24.740342 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Nov 8 00:14:24.740348 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Nov 8 00:14:24.740353 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Nov 8 00:14:24.740379 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Nov 8 00:14:24.740385 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Nov 8 00:14:24.740391 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Nov 8 00:14:24.740396 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Nov 8 00:14:24.740401 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Nov 8 00:14:24.740407 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Nov 8 00:14:24.740429 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Nov 8 00:14:24.740435 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Nov 8 00:14:24.740440 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Nov 8 00:14:24.740445 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Nov 8 00:14:24.740450 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Nov 8 00:14:24.740457 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Nov 8 00:14:24.740462 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Nov 8 00:14:24.740467 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Nov 8 00:14:24.740472 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Nov 8 00:14:24.740478 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Nov 8 00:14:24.740483 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Nov 8 00:14:24.740488 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Nov 8 00:14:24.740494 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Nov 8 00:14:24.740499 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Nov 8 00:14:24.740505 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Nov 8 00:14:24.740511 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Nov 8 00:14:24.740516 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Nov 8 
00:14:24.740521 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Nov 8 00:14:24.740526 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Nov 8 00:14:24.740532 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Nov 8 00:14:24.740537 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Nov 8 00:14:24.740542 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Nov 8 00:14:24.740548 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Nov 8 00:14:24.740553 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Nov 8 00:14:24.740559 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Nov 8 00:14:24.740564 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Nov 8 00:14:24.740570 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Nov 8 00:14:24.740575 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Nov 8 00:14:24.740580 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Nov 8 00:14:24.740586 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Nov 8 00:14:24.740591 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Nov 8 00:14:24.740596 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Nov 8 00:14:24.740602 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Nov 8 00:14:24.740607 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Nov 8 00:14:24.740613 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Nov 8 00:14:24.740618 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Nov 8 00:14:24.740624 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Nov 8 00:14:24.740629 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Nov 8 00:14:24.740634 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Nov 8 00:14:24.740640 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Nov 8 00:14:24.740645 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Nov 8 00:14:24.740650 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Nov 8 00:14:24.740655 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Nov 8 00:14:24.740662 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Nov 8 00:14:24.740667 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Nov 8 00:14:24.740672 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Nov 8 00:14:24.740678 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Nov 8 00:14:24.740683 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Nov 8 00:14:24.740688 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Nov 8 00:14:24.740694 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Nov 8 00:14:24.740699 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Nov 8 00:14:24.740704 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Nov 8 00:14:24.740709 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Nov 8 00:14:24.740716 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Nov 8 00:14:24.740721 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Nov 8 00:14:24.740727 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Nov 8 00:14:24.740732 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Nov 8 00:14:24.740737 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Nov 8 00:14:24.740742 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Nov 8 00:14:24.740747 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Nov 8 00:14:24.740753 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Nov 8 00:14:24.740758 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Nov 8 00:14:24.740763 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Nov 8 00:14:24.740778 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Nov 8 00:14:24.740801 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Nov 8 00:14:24.740826 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Nov 8 00:14:24.740840 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Nov 8 00:14:24.740846 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Nov 8 00:14:24.740851 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:14:24.740857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Nov 8 00:14:24.740862 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:14:24.740868 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Nov 8 00:14:24.740876 kernel: TSC deadline timer available Nov 8 00:14:24.740881 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Nov 8 00:14:24.740887 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Nov 8 00:14:24.740892 kernel: Booting paravirtualized kernel on VMware hypervisor Nov 8 00:14:24.740898 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:14:24.740903 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Nov 8 00:14:24.740909 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 8 00:14:24.740914 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 8 00:14:24.740919 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Nov 8 00:14:24.740926 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Nov 8 00:14:24.740931 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Nov 8 00:14:24.740936 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Nov 8 00:14:24.740945 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Nov 8 00:14:24.740961 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Nov 8 00:14:24.740968 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Nov 8 00:14:24.740974 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Nov 8 00:14:24.740980 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Nov 8 00:14:24.740986 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Nov 8 00:14:24.740992 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Nov 8 00:14:24.740998 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Nov 8 00:14:24.741004 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Nov 8 00:14:24.741009 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Nov 8 00:14:24.741015 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Nov 8 00:14:24.741020 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Nov 8 00:14:24.741026 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:14:24.741044 kernel: random: crng init done Nov 8 00:14:24.741052 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Nov 8 00:14:24.741058 kernel: 
printk: log_buf_len total cpu_extra contributions: 520192 bytes Nov 8 00:14:24.741064 kernel: printk: log_buf_len min size: 262144 bytes Nov 8 00:14:24.741070 kernel: printk: log_buf_len: 1048576 bytes Nov 8 00:14:24.741075 kernel: printk: early log buf free: 239760(91%) Nov 8 00:14:24.741082 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:14:24.741087 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:14:24.741093 kernel: Fallback order for Node 0: 0 Nov 8 00:14:24.741099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Nov 8 00:14:24.741106 kernel: Policy zone: DMA32 Nov 8 00:14:24.741111 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:14:24.741117 kernel: Memory: 1936388K/2096628K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 159980K reserved, 0K cma-reserved) Nov 8 00:14:24.741124 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Nov 8 00:14:24.741130 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:14:24.741137 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:14:24.741143 kernel: Dynamic Preempt: voluntary Nov 8 00:14:24.741148 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:14:24.741154 kernel: rcu: RCU event tracing is enabled. Nov 8 00:14:24.741160 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Nov 8 00:14:24.741166 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:14:24.741172 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:14:24.741177 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:14:24.741183 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:14:24.741189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Nov 8 00:14:24.741195 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Nov 8 00:14:24.741201 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Nov 8 00:14:24.741207 kernel: Console: colour VGA+ 80x25 Nov 8 00:14:24.741212 kernel: printk: console [tty0] enabled Nov 8 00:14:24.741218 kernel: printk: console [ttyS0] enabled Nov 8 00:14:24.741224 kernel: ACPI: Core revision 20230628 Nov 8 00:14:24.741230 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Nov 8 00:14:24.741236 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:14:24.741242 kernel: x2apic enabled Nov 8 00:14:24.741249 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:14:24.741254 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:14:24.741260 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:14:24.741266 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Nov 8 00:14:24.741272 kernel: Disabled fast string operations Nov 8 00:14:24.741277 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:14:24.741283 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:14:24.741290 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:14:24.741296 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:14:24.741302 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:14:24.741308 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 8 00:14:24.741314 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 8 00:14:24.741319 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 8 00:14:24.741325 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:14:24.741331 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:14:24.741337 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:14:24.741342 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 8 00:14:24.741348 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:14:24.741358 kernel: active return thunk: its_return_thunk Nov 8 00:14:24.741364 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:14:24.741370 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:14:24.741376 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:14:24.741381 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:14:24.741387 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:14:24.741393 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:14:24.741399 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:14:24.741405 kernel: pid_max: default: 131072 minimum: 1024 Nov 8 00:14:24.741411 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:14:24.741417 kernel: landlock: Up and running. Nov 8 00:14:24.741423 kernel: SELinux: Initializing. Nov 8 00:14:24.741428 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.741434 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.741440 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 8 00:14:24.741446 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:14:24.741452 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:14:24.741458 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:14:24.741464 kernel: Performance Events: Skylake events, core PMU driver. 
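
[editor's note] The mitigation lines above (Spectre V1/V2, RETBleed, MMIO Stale Data, SRBDS, GDS, ITS) are also exposed at runtime through the standard sysfs vulnerability interface, which makes them easy to cross-check against this boot log after the system is up. A minimal sketch, assuming a standard Linux /sys layout; the read_mitigations helper name is illustrative, not a kernel API:

```python
#!/usr/bin/env python3
"""Sketch: dump /sys/devices/system/cpu/vulnerabilities so the status
strings can be compared against the "Spectre V2 : Mitigation: ..." style
lines in the boot log above. Output naturally differs per host/guest."""
import pathlib

VULN_DIR = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")

def read_mitigations() -> dict[str, str]:
    # Each file holds one status line, e.g. "Mitigation: Enhanced / Automatic IBRS".
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in read_mitigations().items():
        print(f"{name:24s} {status}")
```
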
Nov 8 00:14:24.741470 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 8 00:14:24.741476 kernel: core: CPUID marked event: 'instructions' unavailable Nov 8 00:14:24.741481 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 8 00:14:24.741487 kernel: core: CPUID marked event: 'cache references' unavailable Nov 8 00:14:24.741492 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 8 00:14:24.741498 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 8 00:14:24.741504 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 8 00:14:24.741510 kernel: ... version: 1 Nov 8 00:14:24.741516 kernel: ... bit width: 48 Nov 8 00:14:24.741521 kernel: ... generic registers: 4 Nov 8 00:14:24.741527 kernel: ... value mask: 0000ffffffffffff Nov 8 00:14:24.741533 kernel: ... max period: 000000007fffffff Nov 8 00:14:24.741539 kernel: ... fixed-purpose events: 0 Nov 8 00:14:24.741545 kernel: ... event mask: 000000000000000f Nov 8 00:14:24.741550 kernel: signal: max sigframe size: 1776 Nov 8 00:14:24.741556 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:14:24.741563 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:14:24.741569 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:14:24.741574 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:14:24.741580 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:14:24.741586 kernel: .... node #0, CPUs: #1 Nov 8 00:14:24.741592 kernel: Disabled fast string operations Nov 8 00:14:24.741597 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 8 00:14:24.741603 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 8 00:14:24.741608 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:14:24.741614 kernel: smpboot: Max logical packages: 128 Nov 8 00:14:24.741621 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 8 00:14:24.741627 kernel: devtmpfs: initialized Nov 8 00:14:24.741636 kernel: x86/mm: Memory block size: 128MB Nov 8 00:14:24.741643 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 8 00:14:24.741653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:14:24.741666 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 8 00:14:24.741673 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:14:24.741679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:14:24.741690 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:14:24.741698 kernel: audit: type=2000 audit(1762560863.093:1): state=initialized audit_enabled=0 res=1 Nov 8 00:14:24.741704 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:14:24.741709 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:14:24.741715 kernel: cpuidle: using governor menu Nov 8 00:14:24.741721 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 8 00:14:24.741729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:14:24.741735 kernel: dca service started, version 1.12.1 Nov 8 00:14:24.741741 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 8 00:14:24.741747 kernel: PCI: Using configuration type 1 for base access Nov 8 00:14:24.741754 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
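
[editor's note] The "PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff]" line above pins the PCIe ECAM window for this VM. A minimal sketch of the standard ECAM address arithmetic (4 KiB of config space per function at base + (bus<<20 | dev<<15 | fn<<12)); the example device 00:0f.0 is the VMware SVGA adapter enumerated later in this log, and actually touching the address would of course require mapping physical memory:

```python
"""Sketch: compute the ECAM config-space address of a PCI function,
using the MMCONFIG base reported in the boot log above."""

ECAM_BASE = 0xF000_0000  # from "PCI: MMCONFIG for domain 0000 [bus 00-7f]"

def ecam_addr(bus: int, dev: int, fn: int, reg: int = 0) -> int:
    # Standard PCIe ECAM layout: one 4 KiB window per (bus, device, function).
    assert bus < 0x80 and dev < 32 and fn < 8 and reg < 4096
    return ECAM_BASE + ((bus << 20) | (dev << 15) | (fn << 12) | reg)

print(hex(ecam_addr(0x00, 0x0F, 0)))  # -> 0xf0078000 for device 00:0f.0
```
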
Nov 8 00:14:24.741760 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:14:24.741766 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:14:24.741771 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:14:24.741777 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:14:24.741783 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:14:24.741789 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:14:24.741794 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:14:24.741800 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:14:24.741807 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 8 00:14:24.741812 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:14:24.741818 kernel: ACPI: Interpreter enabled Nov 8 00:14:24.741824 kernel: ACPI: PM: (supports S0 S1 S5) Nov 8 00:14:24.741829 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:14:24.741835 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:14:24.741841 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:14:24.741847 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 8 00:14:24.741852 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 8 00:14:24.741932 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:14:24.741990 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 8 00:14:24.744095 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 8 00:14:24.744107 kernel: PCI host bridge to bus 0000:00 Nov 8 00:14:24.744166 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:14:24.744215 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 8 00:14:24.744265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:14:24.744311 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:14:24.744359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 8 00:14:24.744406 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 8 00:14:24.744484 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 8 00:14:24.744545 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 8 00:14:24.744601 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 8 00:14:24.744662 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 8 00:14:24.744714 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 8 00:14:24.744766 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 8 00:14:24.744818 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 8 00:14:24.744869 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 8 00:14:24.744920 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 8 00:14:24.744978 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 8 00:14:24.745040 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 8 00:14:24.745095 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 8 00:14:24.745151 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 8 00:14:24.745203 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 8 00:14:24.745255 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Nov 8 00:14:24.745311 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 8 00:14:24.745367 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 8 00:14:24.745418 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 8 00:14:24.745468 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 8 00:14:24.745519 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 8 00:14:24.745570 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:14:24.745625 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 8 00:14:24.745683 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.745736 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.745795 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.745848 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.745905 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.745958 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.746015 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748126 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748189 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748244 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748301 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748361 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748423 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748480 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748536 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748589 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748645 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748697 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748755 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748808 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748865 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748918 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748975 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751034 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751113 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751168 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751224 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751277 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751332 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751394 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751452 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751508 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751565 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751618 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751673 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Nov 8 00:14:24.751726 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751781 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751836 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751895 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751947 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752002 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752070 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752128 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752184 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752240 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752292 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752348 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752401 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752456 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752511 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752567 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752620 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752677 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752730 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752789 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752845 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752901 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752954 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.753011 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.754906 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.754967 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.755021 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.755100 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.755154 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.755210 kernel: pci_bus 0000:01: extended config space not accessible Nov 8 00:14:24.755265 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:14:24.755320 kernel: pci_bus 0000:02: extended config space not accessible Nov 8 00:14:24.755329 kernel: acpiphp: Slot [32] registered Nov 8 00:14:24.755337 kernel: acpiphp: Slot [33] registered Nov 8 00:14:24.755343 kernel: acpiphp: Slot [34] registered Nov 8 00:14:24.755349 kernel: acpiphp: Slot [35] registered Nov 8 00:14:24.755355 kernel: acpiphp: Slot [36] registered Nov 8 00:14:24.755361 kernel: acpiphp: Slot [37] registered Nov 8 00:14:24.755367 kernel: acpiphp: Slot [38] registered Nov 8 00:14:24.755373 kernel: acpiphp: Slot [39] registered Nov 8 00:14:24.755379 kernel: acpiphp: Slot [40] registered Nov 8 00:14:24.755385 kernel: acpiphp: Slot [41] registered Nov 8 00:14:24.755391 kernel: acpiphp: Slot [42] registered Nov 8 00:14:24.755398 kernel: acpiphp: Slot [43] registered Nov 8 00:14:24.755404 kernel: acpiphp: Slot [44] registered Nov 8 00:14:24.755410 kernel: acpiphp: Slot [45] registered Nov 8 00:14:24.755415 kernel: 
acpiphp: Slot [46] registered Nov 8 00:14:24.755421 kernel: acpiphp: Slot [47] registered Nov 8 00:14:24.755427 kernel: acpiphp: Slot [48] registered Nov 8 00:14:24.755433 kernel: acpiphp: Slot [49] registered Nov 8 00:14:24.755439 kernel: acpiphp: Slot [50] registered Nov 8 00:14:24.755444 kernel: acpiphp: Slot [51] registered Nov 8 00:14:24.755451 kernel: acpiphp: Slot [52] registered Nov 8 00:14:24.755457 kernel: acpiphp: Slot [53] registered Nov 8 00:14:24.755463 kernel: acpiphp: Slot [54] registered Nov 8 00:14:24.755468 kernel: acpiphp: Slot [55] registered Nov 8 00:14:24.755474 kernel: acpiphp: Slot [56] registered Nov 8 00:14:24.755480 kernel: acpiphp: Slot [57] registered Nov 8 00:14:24.755486 kernel: acpiphp: Slot [58] registered Nov 8 00:14:24.755492 kernel: acpiphp: Slot [59] registered Nov 8 00:14:24.755498 kernel: acpiphp: Slot [60] registered Nov 8 00:14:24.755503 kernel: acpiphp: Slot [61] registered Nov 8 00:14:24.755510 kernel: acpiphp: Slot [62] registered Nov 8 00:14:24.755516 kernel: acpiphp: Slot [63] registered Nov 8 00:14:24.755568 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 8 00:14:24.755668 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:14:24.755786 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:14:24.755841 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:14:24.755894 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 8 00:14:24.755946 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 8 00:14:24.756001 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 8 00:14:24.756101 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 8 00:14:24.756154 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 8 00:14:24.756212 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 8 00:14:24.756266 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 8 00:14:24.756319 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 8 00:14:24.756372 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:14:24.756443 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.756805 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:14:24.756861 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:14:24.756914 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:14:24.756966 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:14:24.757018 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:14:24.757106 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:14:24.757162 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:14:24.757213 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:14:24.757267 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:14:24.757318 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:14:24.757369 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:14:24.757419 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:14:24.757471 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:14:24.757523 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:14:24.757577 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:14:24.757630 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:14:24.757681 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:14:24.757732 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:14:24.757786 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:14:24.757837 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:14:24.757888 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:14:24.757941 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:14:24.757992 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:14:24.758060 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:14:24.758116 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:14:24.758166 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:14:24.758220 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:14:24.758311 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 8 00:14:24.758391 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 8 00:14:24.758461 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 8 00:14:24.758513 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 8 00:14:24.758565 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 8 00:14:24.758617 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:14:24.758670 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 8 00:14:24.758725 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:14:24.758777 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:14:24.758830 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:14:24.758881 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:14:24.758932 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:14:24.758985 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:14:24.761053 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:14:24.761116 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:14:24.761175 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:14:24.761230 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:14:24.761282 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:14:24.761333 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:14:24.761389 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:14:24.761478 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:14:24.761529 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:14:24.761583 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:14:24.761635 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:14:24.761686 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:14:24.761736 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:14:24.761789 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:14:24.761840 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:14:24.761890 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:14:24.761943 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:14:24.761997 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:14:24.762067 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:14:24.762121 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:14:24.762172 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:14:24.762222 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:14:24.762274 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:14:24.762325 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:14:24.762375 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:14:24.762437 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:14:24.762492 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:14:24.762544 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:14:24.762614 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:14:24.762681 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:14:24.762733 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:14:24.762785 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:14:24.763082 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:14:24.763143 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:14:24.763199 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:14:24.765051 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:14:24.765116 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:14:24.765173 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:14:24.765228 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:14:24.765293 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:14:24.765351 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:14:24.765403 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:14:24.765455 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:14:24.765508 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:14:24.765559 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:14:24.765610 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:14:24.765698 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:14:24.765749 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:14:24.765803 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:14:24.765855 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:14:24.765906 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:14:24.765956 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:14:24.766007 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:14:24.766153 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:14:24.766206 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:14:24.766258 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:14:24.766311 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:14:24.766364 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:14:24.766416 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:14:24.766467 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:14:24.766529 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:14:24.766596 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:14:24.766649 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:14:24.766702 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:14:24.766756 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:14:24.766807 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:14:24.766858 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:14:24.766909 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:14:24.766960 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:14:24.767011 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:14:24.767123 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:14:24.767175 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:14:24.767231 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:14:24.767282 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:14:24.767333 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:14:24.767342 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 8 00:14:24.767348 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 8 00:14:24.767359 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Nov 8 00:14:24.767366 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:14:24.767372 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 8 00:14:24.767380 kernel: iommu: Default domain type: Translated Nov 8 00:14:24.767385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:14:24.767391 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:14:24.767397 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:14:24.767403 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 8 00:14:24.767409 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 8 00:14:24.767464 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 8 00:14:24.767515 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 8 00:14:24.767565 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:14:24.767576 kernel: vgaarb: loaded Nov 8 00:14:24.767582 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 8 00:14:24.767588 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 8 00:14:24.767594 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:14:24.767600 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:14:24.767606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:14:24.767612 kernel: pnp: PnP ACPI init Nov 8 00:14:24.767699 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 8 00:14:24.767769 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 8 00:14:24.768300 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 8 00:14:24.768387 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 8 00:14:24.768457 kernel: pnp 00:06: [dma 2] Nov 8 00:14:24.768509 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 8 00:14:24.768557 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 8 00:14:24.768604 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 8 00:14:24.768615 kernel: pnp: PnP ACPI: found 8 devices Nov 8 00:14:24.768621 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:14:24.768627 kernel: NET: Registered PF_INET protocol family Nov 8 00:14:24.768633 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:14:24.768639 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:14:24.768645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:14:24.768650 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:14:24.768656 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:14:24.768664 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:14:24.768669 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.768675 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.768681 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:14:24.768687 kernel: NET: Registered PF_XDP protocol family Nov 8 00:14:24.768741 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 8 00:14:24.768795 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 8 00:14:24.768857 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:14:24.768914 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:14:24.768967 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:14:24.769020 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 8 00:14:24.769084 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 8 00:14:24.769138 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 8 00:14:24.769191 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 8 00:14:24.769246 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 8 00:14:24.769297 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 8 00:14:24.769350 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 8 00:14:24.769401 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 8 00:14:24.769453 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 8 00:14:24.769505 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 8 00:14:24.769559 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 8 00:14:24.769611 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 8 00:14:24.769662 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 8 00:14:24.769713 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 8 00:14:24.769764 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 8 00:14:24.769818 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 8 00:14:24.769870 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 8 00:14:24.769922 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 8 00:14:24.769974 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:14:24.770024 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:14:24.770585 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770644 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.770701 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770753 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.770806 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770858 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.770910 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770962 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771014 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771091 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771148 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771200 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771252 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771304 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771355 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771425 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771495 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771546 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771600 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771652 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771703 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771755 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771806 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771858 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771910 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771963 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772017 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772096 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772149 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772200 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772252 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772303 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772354 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772439 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772494 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772546 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772597 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772649 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772700 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772752 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772804 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772855 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772910 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772962 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773013 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773143 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773207 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773260 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773311 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773362 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773413 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773463 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773518 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Nov 8 00:14:24.773569 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773620 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773672 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773723 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773774 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773825 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773876 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773928 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773979 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776070 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776139 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776195 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776272 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776326 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776378 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776430 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776481 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776533 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776588 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776640 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776692 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776744 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776797 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776849 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776901 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776953 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.777005 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.777092 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.777143 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.777195 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.777245 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.777297 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:14:24.777349 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 8 00:14:24.777399 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:14:24.777450 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:14:24.778456 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:14:24.778525 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 8 00:14:24.778583 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:14:24.778676 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:14:24.778782 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:14:24.778835 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:14:24.778889 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:14:24.778941 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:14:24.778993 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:14:24.779053 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:14:24.780127 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:14:24.780187 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:14:24.780240 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:14:24.780292 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:14:24.780344 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:14:24.780433 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:14:24.780504 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:14:24.780556 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:14:24.780608 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:14:24.780664 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:14:24.780718 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:14:24.780771 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:14:24.780822 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:14:24.780874 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:14:24.782208 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:14:24.782280 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:14:24.782338 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:14:24.782393 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:14:24.782446 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:14:24.782505 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 8 00:14:24.782559 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:14:24.782613 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:14:24.782665 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:14:24.782718 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:14:24.782775 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:14:24.782828 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:14:24.782882 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:14:24.782935 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:14:24.784287 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:14:24.784347 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:14:24.784402 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:14:24.784456 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:14:24.784509 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:14:24.784564 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:14:24.784620 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:14:24.784674 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:14:24.784727 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:14:24.784779 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:14:24.784833 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:14:24.785250 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:14:24.785311 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:14:24.785367 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:14:24.785421 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:14:24.785479 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:14:24.785532 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:14:24.785586 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:14:24.785639 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:14:24.785693 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:14:24.785747 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:14:24.785799 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:14:24.785853 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:14:24.785907 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:14:24.785961 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:14:24.786017 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:14:24.786086 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:14:24.786142 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:14:24.786195 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:14:24.786248 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:14:24.786300 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:14:24.786354 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:14:24.786407 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:14:24.786459 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:14:24.786516 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:14:24.786568 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:14:24.786621 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:14:24.786675 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:14:24.786729 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:14:24.786782 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:14:24.786835 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:14:24.786887 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:14:24.786939 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:14:24.786993 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:14:24.788081 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:14:24.788145 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:14:24.788214 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:14:24.788274 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:14:24.788328 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:14:24.788388 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:14:24.788443 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:14:24.788497 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:14:24.788550 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:14:24.788605 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:14:24.788660 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:14:24.788712 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:14:24.788764 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:14:24.788819 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:14:24.788870 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:14:24.788922 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:14:24.788976 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:14:24.789089 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:14:24.789147 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:14:24.789204 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:14:24.789257 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:14:24.789309 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:14:24.789362 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:14:24.789413 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:14:24.789466 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:14:24.789520 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:14:24.789573 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:14:24.789626 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:14:24.789690 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:14:24.789738 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:14:24.789784 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:14:24.789830 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:14:24.789875 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:14:24.789927 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 8 00:14:24.789976 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 8 00:14:24.790023 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:14:24.790109 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:14:24.790157 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:14:24.790205 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:14:24.790252 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:14:24.790299 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:14:24.790354 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 8 00:14:24.790406 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 8 00:14:24.790457 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:14:24.790513 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 8 00:14:24.790562 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Nov 8 00:14:24.790610 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:14:24.790663 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 8 00:14:24.790712 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 8 00:14:24.790760 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:14:24.790815 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 8 00:14:24.790864 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:14:24.790918 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 8 00:14:24.790969 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:14:24.791021 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 8 00:14:24.791113 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:14:24.791171 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 8 00:14:24.791220 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:14:24.791273 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 8 00:14:24.791323 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:14:24.791401 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 8 00:14:24.791454 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 8 00:14:24.791502 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:14:24.791555 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 8 00:14:24.791604 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 8 00:14:24.791655 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:14:24.791710 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 8 00:14:24.791760 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 8 00:14:24.791814 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:14:24.791868 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 8 00:14:24.791917 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:14:24.791970 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 8 00:14:24.792020 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:14:24.792105 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 8 00:14:24.792155 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:14:24.792211 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 8 00:14:24.792260 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:14:24.792312 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 8 00:14:24.792362 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:14:24.792416 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 8 00:14:24.792465 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 8 00:14:24.792518 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:14:24.792572 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 8 00:14:24.792622 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 8 00:14:24.792671 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:14:24.792728 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Nov 8 00:14:24.792777 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 8 00:14:24.792826 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:14:24.792882 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 8 00:14:24.792935 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:14:24.792989 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 8 00:14:24.793100 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:14:24.793160 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 8 00:14:24.793209 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:14:24.793267 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 8 00:14:24.793316 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:14:24.793368 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 8 00:14:24.793417 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:14:24.793472 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 8 00:14:24.793524 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 8 00:14:24.793572 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:14:24.793624 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 8 00:14:24.793673 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 8 00:14:24.793721 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:14:24.793777 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 8 00:14:24.793826 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:14:24.793881 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 8 00:14:24.793931 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:14:24.793985 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 8 00:14:24.794102 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:14:24.794157 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 8 00:14:24.794205 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:14:24.794262 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 8 00:14:24.794310 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:14:24.794368 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 8 00:14:24.794417 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:14:24.794476 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:14:24.794486 kernel: PCI: CLS 32 bytes, default 64 Nov 8 00:14:24.794493 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:14:24.794502 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:14:24.794510 kernel: clocksource: Switched to clocksource tsc Nov 8 00:14:24.794516 kernel: Initialise system trusted keyrings Nov 8 00:14:24.794522 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:14:24.794529 kernel: Key type asymmetric registered Nov 8 00:14:24.794535 kernel: Asymmetric key parser 'x509' registered Nov 8 00:14:24.794541 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Nov 8 00:14:24.794548 kernel: io scheduler mq-deadline registered Nov 8 00:14:24.794554 kernel: io scheduler kyber registered Nov 8 00:14:24.794562 kernel: io scheduler bfq registered Nov 8 00:14:24.794616 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 8 00:14:24.794671 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.794725 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 8 00:14:24.794779 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.794833 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 8 00:14:24.794886 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.794940 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 8 00:14:24.794997 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795064 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 8 00:14:24.795130 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795186 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 8 00:14:24.795240 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795297 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 8 00:14:24.795352 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795405 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 8 00:14:24.795459 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795512 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 8 00:14:24.795565 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795622 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 8 00:14:24.795675 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795730 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 8 00:14:24.795787 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795840 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 8 00:14:24.795893 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795951 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 8 00:14:24.796004 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796096 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Nov 8 00:14:24.796165 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796220 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 8 00:14:24.796273 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796330 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 8 00:14:24.796383 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796437 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 8 00:14:24.796490 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796544 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 8 00:14:24.796597 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796653 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 8 00:14:24.796706 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796761 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 8 00:14:24.796814 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796868 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 8 00:14:24.796921 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796977 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 8 00:14:24.797062 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797123 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 8 00:14:24.797176 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797229 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 8 00:14:24.797285 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797339 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 8 00:14:24.797397 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797450 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 8 00:14:24.797503 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797556 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 8 00:14:24.797609 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797667 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 8 00:14:24.797721 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797776 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 8 00:14:24.797829 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797883 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 8 00:14:24.797939 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797994 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 8 00:14:24.798323 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.798393 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 8 00:14:24.798450 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.798462 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:14:24.798469 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:14:24.798476 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:14:24.798482 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 8 00:14:24.798489 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:14:24.798495 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:14:24.798558 kernel: rtc_cmos 00:01: registered as rtc0 Nov 8 00:14:24.798609 kernel: rtc_cmos 00:01: setting system clock to 2025-11-08T00:14:24 UTC (1762560864) Nov 8 00:14:24.798661 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:14:24.798670 kernel: intel_pstate: CPU model not supported Nov 8 00:14:24.798677 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 8 00:14:24.798684 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:14:24.798690 kernel: Segment Routing with IPv6 Nov 8 00:14:24.798696 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:14:24.798702 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:14:24.798709 kernel: Key type dns_resolver registered Nov 8 00:14:24.798715 kernel: IPI shorthand broadcast: enabled Nov 8 00:14:24.798724 kernel: sched_clock: Marking stable (922003612, 231184225)->(1213725522, -60537685) Nov 8 00:14:24.798730 kernel: registered taskstats version 1 Nov 8 00:14:24.798737 kernel: Loading compiled-in X.509 certificates Nov 8 00:14:24.798743 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:14:24.798749 kernel: Key type .fscrypt registered Nov 8 00:14:24.798755 kernel: Key type fscrypt-provisioning registered Nov 8 00:14:24.798762 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:14:24.798768 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:14:24.798775 kernel: ima: No architecture policies found Nov 8 00:14:24.798782 kernel: clk: Disabling unused clocks Nov 8 00:14:24.798788 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:14:24.798795 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:14:24.798801 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:14:24.798807 kernel: Run /init as init process Nov 8 00:14:24.798814 kernel: with arguments: Nov 8 00:14:24.798820 kernel: /init Nov 8 00:14:24.798827 kernel: with environment: Nov 8 00:14:24.798833 kernel: HOME=/ Nov 8 00:14:24.798840 kernel: TERM=linux Nov 8 00:14:24.798849 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:14:24.798857 systemd[1]: Detected virtualization vmware. Nov 8 00:14:24.798864 systemd[1]: Detected architecture x86-64. Nov 8 00:14:24.798871 systemd[1]: Running in initrd. Nov 8 00:14:24.798877 systemd[1]: No hostname configured, using default hostname. Nov 8 00:14:24.798883 systemd[1]: Hostname set to . Nov 8 00:14:24.798891 systemd[1]: Initializing machine ID from random generator. Nov 8 00:14:24.798897 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:14:24.798904 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:14:24.798910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:14:24.798918 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:14:24.798924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:14:24.798931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:14:24.798937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:14:24.798946 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:14:24.798952 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:14:24.798959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:14:24.798966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:14:24.798972 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:14:24.798978 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:14:24.798985 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:14:24.798993 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:14:24.798999 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:14:24.799006 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:14:24.799012 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:14:24.799019 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 8 00:14:24.799026 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:14:24.799315 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:14:24.799323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:14:24.799329 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:14:24.799339 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:14:24.799346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:14:24.799352 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:14:24.799358 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:14:24.799365 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:14:24.799371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:14:24.799379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:24.799386 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:14:24.799408 systemd-journald[217]: Collecting audit messages is disabled. Nov 8 00:14:24.799425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:14:24.799432 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:14:24.799441 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:14:24.799447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:24.799454 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:14:24.799461 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:14:24.799467 kernel: Bridge firewalling registered Nov 8 00:14:24.799477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:14:24.799484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:14:24.799491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:14:24.799498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:14:24.799504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:14:24.799511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:24.799517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:14:24.799528 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:14:24.799535 systemd-journald[217]: Journal started Nov 8 00:14:24.799552 systemd-journald[217]: Runtime Journal (/run/log/journal/69fa7aa358f64310a66e748faeb52d37) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:14:24.740972 systemd-modules-load[218]: Inserted module 'overlay' Nov 8 00:14:24.770138 systemd-modules-load[218]: Inserted module 'br_netfilter' Nov 8 00:14:24.801203 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 8 00:14:24.804636 dracut-cmdline[238]: dracut-dracut-053 Nov 8 00:14:24.807474 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:14:24.807189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:14:24.813350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:14:24.817136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:14:24.835130 systemd-resolved[274]: Positive Trust Anchors: Nov 8 00:14:24.835139 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:14:24.835162 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:14:24.836859 systemd-resolved[274]: Defaulting to hostname 'linux'. Nov 8 00:14:24.837556 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:14:24.837722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:14:24.855049 kernel: SCSI subsystem initialized Nov 8 00:14:24.862043 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:14:24.870068 kernel: iscsi: registered transport (tcp) Nov 8 00:14:24.885064 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:14:24.885124 kernel: QLogic iSCSI HBA Driver Nov 8 00:14:24.906492 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:14:24.911137 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:14:24.926543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:14:24.926590 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:14:24.927882 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:14:24.959048 kernel: raid6: avx2x4 gen() 52236 MB/s Nov 8 00:14:24.976072 kernel: raid6: avx2x2 gen() 41840 MB/s Nov 8 00:14:24.993243 kernel: raid6: avx2x1 gen() 33873 MB/s Nov 8 00:14:24.993295 kernel: raid6: using algorithm avx2x4 gen() 52236 MB/s Nov 8 00:14:25.011245 kernel: raid6: .... xor() 18339 MB/s, rmw enabled Nov 8 00:14:25.011305 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:14:25.025044 kernel: xor: automatically using best checksumming function avx Nov 8 00:14:25.130101 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:14:25.135273 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:14:25.139140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 8 00:14:25.146978 systemd-udevd[433]: Using default interface naming scheme 'v255'. Nov 8 00:14:25.149553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:14:25.160154 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:14:25.166696 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Nov 8 00:14:25.181975 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:14:25.186144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:14:25.257716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:14:25.261143 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:14:25.270863 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:14:25.271537 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:14:25.271652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:14:25.271758 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:14:25.278172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:14:25.285845 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:14:25.333103 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 8 00:14:25.336198 kernel: vmw_pvscsi: using 64bit dma Nov 8 00:14:25.336227 kernel: vmw_pvscsi: max_id: 16 Nov 8 00:14:25.336236 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 8 00:14:25.343039 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 8 00:14:25.343070 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 8 00:14:25.343079 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 8 00:14:25.343464 kernel: vmw_pvscsi: using MSI-X Nov 8 00:14:25.346042 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 8 00:14:25.348040 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 8 00:14:25.352041 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 8 00:14:25.352187 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 8 00:14:25.357088 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:14:25.365474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:14:25.365724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:25.366113 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 8 00:14:25.366229 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:14:25.366469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:14:25.366546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:25.366933 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:25.375798 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 8 00:14:25.373238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:25.383929 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:14:25.383960 kernel: AES CTR mode by8 optimization enabled Nov 8 00:14:25.385124 kernel: libata version 3.00 loaded. 
Nov 8 00:14:25.389041 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 8 00:14:25.390948 kernel: scsi host1: ata_piix Nov 8 00:14:25.391046 kernel: scsi host2: ata_piix Nov 8 00:14:25.394431 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 8 00:14:25.394449 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 8 00:14:25.399077 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 8 00:14:25.399175 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:14:25.399254 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 8 00:14:25.400192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:25.401083 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 8 00:14:25.401162 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 8 00:14:25.404161 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:14:25.407041 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:25.408044 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:14:25.416633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:25.559051 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 8 00:14:25.565042 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 8 00:14:25.589437 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 8 00:14:25.589560 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:14:25.597173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 8 00:14:25.598086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Nov 8 00:14:25.599984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 8 00:14:25.603202 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:14:25.603304 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (494) Nov 8 00:14:25.607084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:14:25.609401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 8 00:14:25.609689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 8 00:14:25.616207 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:14:25.641745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:25.646045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:25.650048 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:26.651098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:26.652002 disk-uuid[588]: The operation has completed successfully. Nov 8 00:14:26.685753 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:14:26.685814 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:14:26.689105 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:14:26.690818 sh[610]: Success Nov 8 00:14:26.699066 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:14:26.731442 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Nov 8 00:14:26.741883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:14:26.742117 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:14:26.759045 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:14:26.759080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:26.759089 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:14:26.759100 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:14:26.759631 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:14:26.767042 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:14:26.767979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:14:26.777118 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 8 00:14:26.778330 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:14:26.799591 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:26.799625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:26.799634 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:14:26.821045 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:14:26.826945 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:14:26.828056 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:26.832896 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:14:26.842165 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:14:26.843007 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:14:26.843933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:14:26.912712 ignition[670]: Ignition 2.19.0 Nov 8 00:14:26.912719 ignition[670]: Stage: fetch-offline Nov 8 00:14:26.912743 ignition[670]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:26.912748 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:26.912803 ignition[670]: parsed url from cmdline: "" Nov 8 00:14:26.912804 ignition[670]: no config URL provided Nov 8 00:14:26.912807 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:14:26.912812 ignition[670]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:14:26.913186 ignition[670]: config successfully fetched Nov 8 00:14:26.913203 ignition[670]: parsing config with SHA512: c159b550d3094945aab17c4d67d33324f3b71bc7d11664441500586d73a3b96ac71770e51393f92ed76f0be42766f2b828baa449335b2567f381ad626b20b95f Nov 8 00:14:26.915684 unknown[670]: fetched base config from "system" Nov 8 00:14:26.915690 unknown[670]: fetched user config from "vmware" Nov 8 00:14:26.915972 ignition[670]: fetch-offline: fetch-offline passed Nov 8 00:14:26.916009 ignition[670]: Ignition finished successfully Nov 8 00:14:26.917063 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:14:26.930629 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:14:26.935140 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 8 00:14:26.946687 systemd-networkd[803]: lo: Link UP Nov 8 00:14:26.946692 systemd-networkd[803]: lo: Gained carrier Nov 8 00:14:26.947447 systemd-networkd[803]: Enumeration completed Nov 8 00:14:26.947721 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:14:26.947737 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 8 00:14:26.947861 systemd[1]: Reached target network.target - Network. Nov 8 00:14:26.951226 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:14:26.951357 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:14:26.947948 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:14:26.950892 systemd-networkd[803]: ens192: Link UP Nov 8 00:14:26.950895 systemd-networkd[803]: ens192: Gained carrier Nov 8 00:14:26.954194 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:14:26.962112 ignition[805]: Ignition 2.19.0 Nov 8 00:14:26.962119 ignition[805]: Stage: kargs Nov 8 00:14:26.962231 ignition[805]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:26.962238 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:26.962807 ignition[805]: kargs: kargs passed Nov 8 00:14:26.962832 ignition[805]: Ignition finished successfully Nov 8 00:14:26.963866 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:14:26.968140 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:14:26.975528 ignition[813]: Ignition 2.19.0 Nov 8 00:14:26.975536 ignition[813]: Stage: disks Nov 8 00:14:26.975676 ignition[813]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:26.975683 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:26.976401 ignition[813]: disks: disks passed Nov 8 00:14:26.976435 ignition[813]: Ignition finished successfully Nov 8 00:14:26.977150 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:14:26.977674 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:14:26.977917 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:14:26.978191 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:14:26.978427 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:14:26.978659 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:14:26.983124 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:14:26.993571 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:14:26.994641 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:14:27.002106 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:14:27.067666 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:14:27.068086 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:14:27.068026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:14:27.071086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:14:27.073024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 8 00:14:27.073304 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:14:27.073329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:14:27.073342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:14:27.077378 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:14:27.078049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:14:27.081047 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829) Nov 8 00:14:27.083369 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:27.083387 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:27.083396 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:14:27.088040 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:14:27.089303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:14:27.112413 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:14:27.114987 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:14:27.117146 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:14:27.119258 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:14:27.177380 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:14:27.181150 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:14:27.183601 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:14:27.187422 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:27.198021 ignition[942]: INFO : Ignition 2.19.0 Nov 8 00:14:27.200418 ignition[942]: INFO : Stage: mount Nov 8 00:14:27.200549 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:27.200549 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:27.201521 ignition[942]: INFO : mount: mount passed Nov 8 00:14:27.201717 ignition[942]: INFO : Ignition finished successfully Nov 8 00:14:27.203436 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:14:27.207170 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:14:27.212655 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:14:27.594567 systemd-resolved[274]: Detected conflict on linux IN A 139.178.70.104 Nov 8 00:14:27.594578 systemd-resolved[274]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Nov 8 00:14:27.756122 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:14:27.761139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 8 00:14:27.769061 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (955) Nov 8 00:14:27.772221 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:27.772239 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:27.772247 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:14:27.778045 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:14:27.778425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:14:27.789928 ignition[971]: INFO : Ignition 2.19.0 Nov 8 00:14:27.789928 ignition[971]: INFO : Stage: files Nov 8 00:14:27.790291 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:27.790291 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:27.790627 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:14:27.791422 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:14:27.791422 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:14:27.793773 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:14:27.794013 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:14:27.794232 unknown[971]: wrote ssh authorized keys file for user: core Nov 8 00:14:27.794545 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:14:27.796912 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:14:27.796912 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:14:27.847607 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:14:27.964760 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] 
writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 8 00:14:28.457191 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:14:28.705191 systemd-networkd[803]: ens192: Gained IPv6LL Nov 8 00:14:29.674239 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:29.674729 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:14:29.674729 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:14:29.827138 ignition[971]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(12): [finished] setting 
preset to enabled for "prepare-helm.service" Nov 8 00:14:29.831269 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:14:29.831269 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:14:29.831269 ignition[971]: INFO : files: files passed Nov 8 00:14:29.831269 ignition[971]: INFO : Ignition finished successfully Nov 8 00:14:29.831674 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:14:29.835140 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:14:29.837143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:14:29.837905 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:14:29.837970 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:14:29.851373 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:14:29.851373 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:14:29.852524 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:14:29.853427 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:14:29.853957 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:14:29.858190 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:14:29.872102 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:14:29.872167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:14:29.872468 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:14:29.872596 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:14:29.872793 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:14:29.873321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:14:29.883479 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:14:29.888149 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:14:29.893620 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:14:29.893799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:14:29.894255 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:14:29.894518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:14:29.894593 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:14:29.895130 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:14:29.895386 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:14:29.895667 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:14:29.895931 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:14:29.896407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:14:29.896562 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Nov 8 00:14:29.896964 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:14:29.897188 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:14:29.897579 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:14:29.897846 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:14:29.898087 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:14:29.898153 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:14:29.898667 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:14:29.898938 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:14:29.899082 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:14:29.899127 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:14:29.899351 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:14:29.899412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:14:29.899690 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:14:29.899754 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:14:29.899962 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:14:29.900103 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:14:29.904058 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:14:29.904244 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:14:29.904451 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:14:29.904638 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:14:29.904711 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:14:29.904914 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:14:29.904983 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:14:29.905217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:14:29.905291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:14:29.905510 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:14:29.905567 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:14:29.913215 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:14:29.913341 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:14:29.913438 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:14:29.915216 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:14:29.915339 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:14:29.915442 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:14:29.915736 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:14:29.915817 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:14:29.919390 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:14:29.919463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 8 00:14:29.922693 ignition[1027]: INFO : Ignition 2.19.0 Nov 8 00:14:29.925260 ignition[1027]: INFO : Stage: umount Nov 8 00:14:29.925260 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:29.925260 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:29.925260 ignition[1027]: INFO : umount: umount passed Nov 8 00:14:29.925260 ignition[1027]: INFO : Ignition finished successfully Nov 8 00:14:29.926165 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:14:29.926239 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:14:29.927452 systemd[1]: Stopped target network.target - Network. Nov 8 00:14:29.927768 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:14:29.927900 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:14:29.928132 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:14:29.928155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:14:29.928380 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:14:29.928403 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:14:29.928723 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:14:29.928745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:14:29.929045 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:14:29.929302 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:14:29.930929 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:14:29.930988 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:14:29.931858 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:14:29.931909 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:14:29.934622 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:14:29.934697 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:14:29.935112 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:14:29.935137 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:14:29.939181 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:14:29.939287 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:14:29.939320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:14:29.939454 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 8 00:14:29.939479 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:14:29.939598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:14:29.939620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:14:29.939730 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:14:29.939750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:14:29.939909 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:14:29.943834 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:14:29.948312 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 8 00:14:29.948405 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:14:29.954488 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:14:29.954595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:14:29.954982 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:14:29.955022 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:14:29.955308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:14:29.955333 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:14:29.955544 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:14:29.955574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:14:29.955933 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:14:29.955963 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:14:29.956581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:14:29.956613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:29.960227 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:14:29.960381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:14:29.960427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:14:29.961160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:14:29.961191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:29.964079 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:14:29.964141 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:14:30.056394 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:14:30.056484 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:14:30.056855 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:14:30.057005 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:14:30.057054 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:14:30.061155 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:14:30.077152 systemd[1]: Switching root. 
Nov 8 00:14:30.111216 systemd-journald[217]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Nov 8 00:14:24.741272 kernel: Disabled fast string operations Nov 8 00:14:24.741277 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:14:24.741283 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:14:24.741290 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:14:24.741296 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:14:24.741302 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:14:24.741308 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 8 00:14:24.741314 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 8 00:14:24.741319 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 8 00:14:24.741325 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:14:24.741331 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:14:24.741337 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:14:24.741342 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 8 00:14:24.741348 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:14:24.741358 kernel: active return thunk: its_return_thunk Nov 8 00:14:24.741364 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:14:24.741370 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:14:24.741376 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:14:24.741381 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:14:24.741387 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:14:24.741393 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:14:24.741399 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:14:24.741405 kernel: pid_max: default: 131072 minimum: 1024 Nov 8 00:14:24.741411 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:14:24.741417 kernel: landlock: Up and running. Nov 8 00:14:24.741423 kernel: SELinux: Initializing. Nov 8 00:14:24.741428 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.741434 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.741440 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 8 00:14:24.741446 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:14:24.741452 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:14:24.741458 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:14:24.741464 kernel: Performance Events: Skylake events, core PMU driver. 
Nov 8 00:14:24.741470 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 8 00:14:24.741476 kernel: core: CPUID marked event: 'instructions' unavailable Nov 8 00:14:24.741481 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 8 00:14:24.741487 kernel: core: CPUID marked event: 'cache references' unavailable Nov 8 00:14:24.741492 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 8 00:14:24.741498 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 8 00:14:24.741504 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 8 00:14:24.741510 kernel: ... version: 1 Nov 8 00:14:24.741516 kernel: ... bit width: 48 Nov 8 00:14:24.741521 kernel: ... generic registers: 4 Nov 8 00:14:24.741527 kernel: ... value mask: 0000ffffffffffff Nov 8 00:14:24.741533 kernel: ... max period: 000000007fffffff Nov 8 00:14:24.741539 kernel: ... fixed-purpose events: 0 Nov 8 00:14:24.741545 kernel: ... event mask: 000000000000000f Nov 8 00:14:24.741550 kernel: signal: max sigframe size: 1776 Nov 8 00:14:24.741556 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:14:24.741563 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:14:24.741569 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:14:24.741574 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:14:24.741580 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:14:24.741586 kernel: .... node #0, CPUs: #1 Nov 8 00:14:24.741592 kernel: Disabled fast string operations Nov 8 00:14:24.741597 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 8 00:14:24.741603 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 8 00:14:24.741608 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:14:24.741614 kernel: smpboot: Max logical packages: 128 Nov 8 00:14:24.741621 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 8 00:14:24.741627 kernel: devtmpfs: initialized Nov 8 00:14:24.741636 kernel: x86/mm: Memory block size: 128MB Nov 8 00:14:24.741643 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 8 00:14:24.741653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:14:24.741666 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 8 00:14:24.741673 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:14:24.741679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:14:24.741690 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:14:24.741698 kernel: audit: type=2000 audit(1762560863.093:1): state=initialized audit_enabled=0 res=1 Nov 8 00:14:24.741704 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:14:24.741709 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:14:24.741715 kernel: cpuidle: using governor menu Nov 8 00:14:24.741721 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 8 00:14:24.741729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:14:24.741735 kernel: dca service started, version 1.12.1 Nov 8 00:14:24.741741 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 8 00:14:24.741747 kernel: PCI: Using configuration type 1 for base access Nov 8 00:14:24.741754 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:14:24.741760 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:14:24.741766 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:14:24.741771 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:14:24.741777 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:14:24.741783 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:14:24.741789 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:14:24.741794 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:14:24.741800 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:14:24.741807 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 8 00:14:24.741812 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:14:24.741818 kernel: ACPI: Interpreter enabled Nov 8 00:14:24.741824 kernel: ACPI: PM: (supports S0 S1 S5) Nov 8 00:14:24.741829 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:14:24.741835 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:14:24.741841 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:14:24.741847 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 8 00:14:24.741852 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 8 00:14:24.741932 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:14:24.741990 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 8 00:14:24.744095 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 8 00:14:24.744107 kernel: PCI host bridge to bus 0000:00 Nov 8 00:14:24.744166 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:14:24.744215 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 8 00:14:24.744265 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:14:24.744311 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:14:24.744359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 8 00:14:24.744406 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 8 00:14:24.744484 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 8 00:14:24.744545 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 8 00:14:24.744601 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 8 00:14:24.744662 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 8 00:14:24.744714 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 8 00:14:24.744766 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 8 00:14:24.744818 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 8 00:14:24.744869 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 8 00:14:24.744920 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 8 00:14:24.744978 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 8 00:14:24.745040 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 8 00:14:24.745095 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 8 00:14:24.745151 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 8 00:14:24.745203 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 8 00:14:24.745255 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Nov 8 00:14:24.745311 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 8 00:14:24.745367 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 8 00:14:24.745418 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 8 00:14:24.745468 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 8 00:14:24.745519 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 8 00:14:24.745570 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:14:24.745625 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 8 00:14:24.745683 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.745736 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.745795 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.745848 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.745905 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.745958 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.746015 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748126 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748189 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748244 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748301 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748361 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748423 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748480 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748536 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748589 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748645 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748697 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748755 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748808 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748865 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.748918 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.748975 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751034 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751113 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751168 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751224 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751277 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751332 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751394 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751452 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751508 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751565 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751618 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751673 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Nov 8 00:14:24.751726 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751781 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751836 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.751895 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.751947 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752002 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752070 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752128 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752184 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752240 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752292 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752348 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752401 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752456 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752511 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752567 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752620 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752677 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752730 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752789 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752845 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.752901 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.752954 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.753011 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.754906 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.754967 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.755021 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.755100 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:14:24.755154 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.755210 kernel: pci_bus 0000:01: extended config space not accessible Nov 8 00:14:24.755265 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:14:24.755320 kernel: pci_bus 0000:02: extended config space not accessible Nov 8 00:14:24.755329 kernel: acpiphp: Slot [32] registered Nov 8 00:14:24.755337 kernel: acpiphp: Slot [33] registered Nov 8 00:14:24.755343 kernel: acpiphp: Slot [34] registered Nov 8 00:14:24.755349 kernel: acpiphp: Slot [35] registered Nov 8 00:14:24.755355 kernel: acpiphp: Slot [36] registered Nov 8 00:14:24.755361 kernel: acpiphp: Slot [37] registered Nov 8 00:14:24.755367 kernel: acpiphp: Slot [38] registered Nov 8 00:14:24.755373 kernel: acpiphp: Slot [39] registered Nov 8 00:14:24.755379 kernel: acpiphp: Slot [40] registered Nov 8 00:14:24.755385 kernel: acpiphp: Slot [41] registered Nov 8 00:14:24.755391 kernel: acpiphp: Slot [42] registered Nov 8 00:14:24.755398 kernel: acpiphp: Slot [43] registered Nov 8 00:14:24.755404 kernel: acpiphp: Slot [44] registered Nov 8 00:14:24.755410 kernel: acpiphp: Slot [45] registered Nov 8 00:14:24.755415 kernel: 
acpiphp: Slot [46] registered Nov 8 00:14:24.755421 kernel: acpiphp: Slot [47] registered Nov 8 00:14:24.755427 kernel: acpiphp: Slot [48] registered Nov 8 00:14:24.755433 kernel: acpiphp: Slot [49] registered Nov 8 00:14:24.755439 kernel: acpiphp: Slot [50] registered Nov 8 00:14:24.755444 kernel: acpiphp: Slot [51] registered Nov 8 00:14:24.755451 kernel: acpiphp: Slot [52] registered Nov 8 00:14:24.755457 kernel: acpiphp: Slot [53] registered Nov 8 00:14:24.755463 kernel: acpiphp: Slot [54] registered Nov 8 00:14:24.755468 kernel: acpiphp: Slot [55] registered Nov 8 00:14:24.755474 kernel: acpiphp: Slot [56] registered Nov 8 00:14:24.755480 kernel: acpiphp: Slot [57] registered Nov 8 00:14:24.755486 kernel: acpiphp: Slot [58] registered Nov 8 00:14:24.755492 kernel: acpiphp: Slot [59] registered Nov 8 00:14:24.755498 kernel: acpiphp: Slot [60] registered Nov 8 00:14:24.755503 kernel: acpiphp: Slot [61] registered Nov 8 00:14:24.755510 kernel: acpiphp: Slot [62] registered Nov 8 00:14:24.755516 kernel: acpiphp: Slot [63] registered Nov 8 00:14:24.755568 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 8 00:14:24.755668 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:14:24.755786 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:14:24.755841 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:14:24.755894 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 8 00:14:24.755946 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 8 00:14:24.756001 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 8 00:14:24.756101 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 8 00:14:24.756154 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 8 00:14:24.756212 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 8 00:14:24.756266 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 8 00:14:24.756319 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 8 00:14:24.756372 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:14:24.756443 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:14:24.756805 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:14:24.756861 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:14:24.756914 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:14:24.756966 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:14:24.757018 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:14:24.757106 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:14:24.757162 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:14:24.757213 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:14:24.757267 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:14:24.757318 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:14:24.757369 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:14:24.757419 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:14:24.757471 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:14:24.757523 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:14:24.757577 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:14:24.757630 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:14:24.757681 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:14:24.757732 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:14:24.757786 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:14:24.757837 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:14:24.757888 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:14:24.757941 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:14:24.757992 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:14:24.758060 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:14:24.758116 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:14:24.758166 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:14:24.758220 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:14:24.758311 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 8 00:14:24.758391 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 8 00:14:24.758461 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 8 00:14:24.758513 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 8 00:14:24.758565 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 8 00:14:24.758617 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:14:24.758670 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 8 00:14:24.758725 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:14:24.758777 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:14:24.758830 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:14:24.758881 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:14:24.758932 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:14:24.758985 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:14:24.761053 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:14:24.761116 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:14:24.761175 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:14:24.761230 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:14:24.761282 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:14:24.761333 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:14:24.761389 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:14:24.761478 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:14:24.761529 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:14:24.761583 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:14:24.761635 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:14:24.761686 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:14:24.761736 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:14:24.761789 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:14:24.761840 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:14:24.761890 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:14:24.761943 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:14:24.761997 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:14:24.762067 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:14:24.762121 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:14:24.762172 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:14:24.762222 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:14:24.762274 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:14:24.762325 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:14:24.762375 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:14:24.762437 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:14:24.762492 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:14:24.762544 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:14:24.762614 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:14:24.762681 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:14:24.762733 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:14:24.762785 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:14:24.763082 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:14:24.763143 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:14:24.763199 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:14:24.765051 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:14:24.765116 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:14:24.765173 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:14:24.765228 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:14:24.765293 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:14:24.765351 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:14:24.765403 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:14:24.765455 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:14:24.765508 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:14:24.765559 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:14:24.765610 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:14:24.765698 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:14:24.765749 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:14:24.765803 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:14:24.765855 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:14:24.765906 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:14:24.765956 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:14:24.766007 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:14:24.766153 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:14:24.766206 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:14:24.766258 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:14:24.766311 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:14:24.766364 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:14:24.766416 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:14:24.766467 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:14:24.766529 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:14:24.766596 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:14:24.766649 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:14:24.766702 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:14:24.766756 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:14:24.766807 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:14:24.766858 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:14:24.766909 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:14:24.766960 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:14:24.767011 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:14:24.767123 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:14:24.767175 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:14:24.767231 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:14:24.767282 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:14:24.767333 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:14:24.767342 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 8 00:14:24.767348 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 8 00:14:24.767359 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Nov 8 00:14:24.767366 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:14:24.767372 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 8 00:14:24.767380 kernel: iommu: Default domain type: Translated Nov 8 00:14:24.767385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:14:24.767391 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:14:24.767397 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:14:24.767403 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 8 00:14:24.767409 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 8 00:14:24.767464 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 8 00:14:24.767515 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 8 00:14:24.767565 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:14:24.767576 kernel: vgaarb: loaded Nov 8 00:14:24.767582 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 8 00:14:24.767588 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 8 00:14:24.767594 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:14:24.767600 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:14:24.767606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:14:24.767612 kernel: pnp: PnP ACPI init Nov 8 00:14:24.767699 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 8 00:14:24.767769 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 8 00:14:24.768300 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 8 00:14:24.768387 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 8 00:14:24.768457 kernel: pnp 00:06: [dma 2] Nov 8 00:14:24.768509 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 8 00:14:24.768557 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 8 00:14:24.768604 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 8 00:14:24.768615 kernel: pnp: PnP ACPI: found 8 devices Nov 8 00:14:24.768621 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:14:24.768627 kernel: NET: Registered PF_INET protocol family Nov 8 00:14:24.768633 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:14:24.768639 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:14:24.768645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:14:24.768650 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:14:24.768656 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:14:24.768664 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:14:24.768669 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.768675 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:14:24.768681 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:14:24.768687 kernel: NET: Registered PF_XDP protocol family Nov 8 00:14:24.768741 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 8 00:14:24.768795 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 8 00:14:24.768857 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:14:24.768914 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:14:24.768967 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:14:24.769020 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 8 00:14:24.769084 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 8 00:14:24.769138 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 8 00:14:24.769191 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 8 00:14:24.769246 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 8 00:14:24.769297 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 8 00:14:24.769350 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 8 00:14:24.769401 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 8 00:14:24.769453 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 8 00:14:24.769505 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 8 00:14:24.769559 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 8 00:14:24.769611 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 8 00:14:24.769662 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 8 00:14:24.769713 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 8 00:14:24.769764 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 8 00:14:24.769818 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 8 00:14:24.769870 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 8 00:14:24.769922 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 8 00:14:24.769974 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:14:24.770024 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:14:24.770585 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770644 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.770701 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770753 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.770806 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770858 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.770910 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.770962 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771014 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771091 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771148 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771200 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771252 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771304 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771355 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771425 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771495 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771546 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771600 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771652 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771703 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771755 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771806 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771858 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.771910 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.771963 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772017 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772096 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772149 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772200 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772252 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772303 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772354 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772439 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772494 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772546 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772597 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772649 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772700 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772752 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772804 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772855 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.772910 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.772962 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773013 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773143 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773207 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773260 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773311 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773362 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773413 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773463 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773518 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Nov 8 00:14:24.773569 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773620 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773672 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773723 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773774 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773825 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773876 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.773928 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.773979 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776070 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776139 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776195 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776272 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776326 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776378 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776430 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776481 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776533 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776588 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776640 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776692 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776744 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776797 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776849 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.776901 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.776953 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.777005 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.777092 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.777143 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.777195 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:14:24.777245 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:14:24.777297 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:14:24.777349 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 8 00:14:24.777399 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:14:24.777450 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:14:24.778456 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:14:24.778525 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 8 00:14:24.778583 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:14:24.778676 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:14:24.778782 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:14:24.778835 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:14:24.778889 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:14:24.778941 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:14:24.778993 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:14:24.779053 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:14:24.780127 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:14:24.780187 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:14:24.780240 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:14:24.780292 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:14:24.780344 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:14:24.780433 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:14:24.780504 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:14:24.780556 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:14:24.780608 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:14:24.780664 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:14:24.780718 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:14:24.780771 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:14:24.780822 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:14:24.780874 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:14:24.782208 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:14:24.782280 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:14:24.782338 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:14:24.782393 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:14:24.782446 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:14:24.782505 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 8 00:14:24.782559 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:14:24.782613 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:14:24.782665 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:14:24.782718 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:14:24.782775 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:14:24.782828 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:14:24.782882 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:14:24.782935 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:14:24.784287 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:14:24.784347 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:14:24.784402 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:14:24.784456 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:14:24.784509 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:14:24.784564 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:14:24.784620 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:14:24.784674 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:14:24.784727 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:14:24.784779 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:14:24.784833 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:14:24.785250 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:14:24.785311 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:14:24.785367 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:14:24.785421 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:14:24.785479 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:14:24.785532 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:14:24.785586 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:14:24.785639 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:14:24.785693 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:14:24.785747 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:14:24.785799 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:14:24.785853 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:14:24.785907 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:14:24.785961 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:14:24.786017 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:14:24.786086 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:14:24.786142 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:14:24.786195 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:14:24.786248 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:14:24.786300 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:14:24.786354 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:14:24.786407 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:14:24.786459 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:14:24.786516 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:14:24.786568 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:14:24.786621 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:14:24.786675 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:14:24.786729 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:14:24.786782 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:14:24.786835 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:14:24.786887 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:14:24.786939 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:14:24.786993 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:14:24.788081 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:14:24.788145 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:14:24.788214 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:14:24.788274 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:14:24.788328 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:14:24.788388 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:14:24.788443 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:14:24.788497 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:14:24.788550 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:14:24.788605 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:14:24.788660 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:14:24.788712 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:14:24.788764 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:14:24.788819 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:14:24.788870 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:14:24.788922 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:14:24.788976 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:14:24.789089 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:14:24.789147 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:14:24.789204 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:14:24.789257 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:14:24.789309 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:14:24.789362 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:14:24.789413 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:14:24.789466 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:14:24.789520 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:14:24.789573 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:14:24.789626 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:14:24.789690 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:14:24.789738 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:14:24.789784 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:14:24.789830 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:14:24.789875 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:14:24.789927 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 8 00:14:24.789976 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 8 00:14:24.790023 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:14:24.790109 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:14:24.790157 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:14:24.790205 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:14:24.790252 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:14:24.790299 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:14:24.790354 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 8 00:14:24.790406 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 8 00:14:24.790457 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:14:24.790513 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 8 00:14:24.790562 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Nov 8 00:14:24.790610 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:14:24.790663 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 8 00:14:24.790712 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 8 00:14:24.790760 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:14:24.790815 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 8 00:14:24.790864 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:14:24.790918 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 8 00:14:24.790969 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:14:24.791021 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 8 00:14:24.791113 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:14:24.791171 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 8 00:14:24.791220 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:14:24.791273 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 8 00:14:24.791323 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:14:24.791401 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 8 00:14:24.791454 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 8 00:14:24.791502 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:14:24.791555 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 8 00:14:24.791604 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 8 00:14:24.791655 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:14:24.791710 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 8 00:14:24.791760 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 8 00:14:24.791814 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:14:24.791868 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 8 00:14:24.791917 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:14:24.791970 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 8 00:14:24.792020 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:14:24.792105 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 8 00:14:24.792155 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:14:24.792211 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 8 00:14:24.792260 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:14:24.792312 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 8 00:14:24.792362 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:14:24.792416 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 8 00:14:24.792465 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 8 00:14:24.792518 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:14:24.792572 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 8 00:14:24.792622 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 8 00:14:24.792671 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:14:24.792728 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Nov 8 00:14:24.792777 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 8 00:14:24.792826 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:14:24.792882 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 8 00:14:24.792935 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:14:24.792989 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 8 00:14:24.793100 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:14:24.793160 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 8 00:14:24.793209 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:14:24.793267 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 8 00:14:24.793316 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:14:24.793368 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 8 00:14:24.793417 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:14:24.793472 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 8 00:14:24.793524 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 8 00:14:24.793572 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:14:24.793624 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 8 00:14:24.793673 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 8 00:14:24.793721 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:14:24.793777 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 8 00:14:24.793826 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:14:24.793881 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 8 00:14:24.793931 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:14:24.793985 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 8 00:14:24.794102 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:14:24.794157 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 8 00:14:24.794205 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:14:24.794262 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 8 00:14:24.794310 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:14:24.794368 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 8 00:14:24.794417 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:14:24.794476 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:14:24.794486 kernel: PCI: CLS 32 bytes, default 64 Nov 8 00:14:24.794493 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:14:24.794502 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:14:24.794510 kernel: clocksource: Switched to clocksource tsc Nov 8 00:14:24.794516 kernel: Initialise system trusted keyrings Nov 8 00:14:24.794522 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:14:24.794529 kernel: Key type asymmetric registered Nov 8 00:14:24.794535 kernel: Asymmetric key parser 'x509' registered Nov 8 00:14:24.794541 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Nov 8 00:14:24.794548 kernel: io scheduler mq-deadline registered Nov 8 00:14:24.794554 kernel: io scheduler kyber registered Nov 8 00:14:24.794562 kernel: io scheduler bfq registered Nov 8 00:14:24.794616 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 8 00:14:24.794671 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.794725 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 8 00:14:24.794779 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.794833 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 8 00:14:24.794886 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.794940 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 8 00:14:24.794997 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795064 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 8 00:14:24.795130 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795186 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 8 00:14:24.795240 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795297 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 8 00:14:24.795352 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795405 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 8 00:14:24.795459 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795512 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 8 00:14:24.795565 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795622 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 8 00:14:24.795675 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795730 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 8 00:14:24.795787 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795840 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 8 00:14:24.795893 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.795951 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 8 00:14:24.796004 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796096 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Nov 8 00:14:24.796165 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796220 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 8 00:14:24.796273 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796330 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 8 00:14:24.796383 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796437 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 8 00:14:24.796490 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796544 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 8 00:14:24.796597 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796653 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 8 00:14:24.796706 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796761 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 8 00:14:24.796814 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796868 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 8 00:14:24.796921 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.796977 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 8 00:14:24.797062 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797123 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 8 00:14:24.797176 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797229 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 8 00:14:24.797285 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797339 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 8 00:14:24.797397 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797450 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 8 00:14:24.797503 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797556 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 8 00:14:24.797609 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797667 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 8 00:14:24.797721 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797776 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 8 00:14:24.797829 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797883 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 8 00:14:24.797939 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.797994 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 8 00:14:24.798323 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.798393 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 8 00:14:24.798450 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:14:24.798462 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:14:24.798469 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:14:24.798476 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:14:24.798482 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 8 00:14:24.798489 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:14:24.798495 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:14:24.798558 kernel: rtc_cmos 00:01: registered as rtc0 Nov 8 00:14:24.798609 kernel: rtc_cmos 00:01: setting system clock to 2025-11-08T00:14:24 UTC (1762560864) Nov 8 00:14:24.798661 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:14:24.798670 kernel: intel_pstate: CPU model not supported Nov 8 00:14:24.798677 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 8 00:14:24.798684 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:14:24.798690 kernel: Segment Routing with IPv6 Nov 8 00:14:24.798696 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:14:24.798702 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:14:24.798709 kernel: Key type dns_resolver registered Nov 8 00:14:24.798715 kernel: IPI shorthand broadcast: enabled Nov 8 00:14:24.798724 kernel: sched_clock: Marking stable (922003612, 231184225)->(1213725522, -60537685) Nov 8 00:14:24.798730 kernel: registered taskstats version 1 Nov 8 00:14:24.798737 kernel: Loading compiled-in X.509 certificates Nov 8 00:14:24.798743 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:14:24.798749 kernel: Key type .fscrypt registered Nov 8 00:14:24.798755 kernel: Key type fscrypt-provisioning registered Nov 8 00:14:24.798762 kernel: ima: No TPM chip found, activating TPM-bypass! 
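The rtc_cmos entry above records the same instant twice: as a calendar date (2025-11-08T00:14:24 UTC) and as its Unix epoch value (1762560864). A minimal sketch checking that the two representations agree, using only the Python standard library:

    from datetime import datetime, timezone

    # Both values come from the rtc_cmos log entry above.
    epoch = 1762560864
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    assert ts == datetime(2025, 11, 8, 0, 14, 24, tzinfo=timezone.utc)
    print(ts.isoformat())  # 2025-11-08T00:14:24+00:00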
Nov 8 00:14:24.798768 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:14:24.798775 kernel: ima: No architecture policies found Nov 8 00:14:24.798782 kernel: clk: Disabling unused clocks Nov 8 00:14:24.798788 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:14:24.798795 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:14:24.798801 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:14:24.798807 kernel: Run /init as init process Nov 8 00:14:24.798814 kernel: with arguments: Nov 8 00:14:24.798820 kernel: /init Nov 8 00:14:24.798827 kernel: with environment: Nov 8 00:14:24.798833 kernel: HOME=/ Nov 8 00:14:24.798840 kernel: TERM=linux Nov 8 00:14:24.798849 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:14:24.798857 systemd[1]: Detected virtualization vmware. Nov 8 00:14:24.798864 systemd[1]: Detected architecture x86-64. Nov 8 00:14:24.798871 systemd[1]: Running in initrd. Nov 8 00:14:24.798877 systemd[1]: No hostname configured, using default hostname. Nov 8 00:14:24.798883 systemd[1]: Hostname set to <linux>. Nov 8 00:14:24.798891 systemd[1]: Initializing machine ID from random generator. Nov 8 00:14:24.798897 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:14:24.798904 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:14:24.798910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:14:24.798918 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:14:24.798924 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:14:24.798931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:14:24.798937 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:14:24.798946 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:14:24.798952 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:14:24.798959 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:14:24.798966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:14:24.798972 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:14:24.798978 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:14:24.798985 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:14:24.798993 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:14:24.798999 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:14:24.799006 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:14:24.799012 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:14:24.799019 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
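The "Expecting device" entries above show systemd's unit-name escaping for device paths: '/' becomes '-', and a literal '-' becomes '\x2d', which is why /dev/disk/by-label/EFI-SYSTEM appears as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A simplified sketch of that mapping (the real rules, implemented by systemd-escape, also cover leading dots and other non-alphanumerics, so treat this as an approximation):

    def escape_device_path(path: str) -> str:
        """Roughly mimic `systemd-escape --path --suffix=device`."""
        body = path.strip("/")
        out = []
        for i, ch in enumerate(body):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # '-' -> \x2d, etc.
        return "".join(out) + ".device"

    print(escape_device_path("/dev/disk/by-label/EFI-SYSTEM"))
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device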
Nov 8 00:14:24.799026 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:14:24.799315 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:14:24.799323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:14:24.799329 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:14:24.799339 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:14:24.799346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:14:24.799352 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:14:24.799358 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:14:24.799365 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:14:24.799371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:14:24.799379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:24.799386 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:14:24.799408 systemd-journald[217]: Collecting audit messages is disabled. Nov 8 00:14:24.799425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:14:24.799432 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:14:24.799441 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:14:24.799447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:24.799454 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:14:24.799461 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:14:24.799467 kernel: Bridge firewalling registered Nov 8 00:14:24.799477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:14:24.799484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:14:24.799491 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:14:24.799498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:14:24.799504 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:14:24.799511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:24.799517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:14:24.799528 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:14:24.799535 systemd-journald[217]: Journal started Nov 8 00:14:24.799552 systemd-journald[217]: Runtime Journal (/run/log/journal/69fa7aa358f64310a66e748faeb52d37) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:14:24.740972 systemd-modules-load[218]: Inserted module 'overlay' Nov 8 00:14:24.770138 systemd-modules-load[218]: Inserted module 'br_netfilter' Nov 8 00:14:24.801203 systemd[1]: Started systemd-journald.service - Journal Service. 
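Note the two entries above with earlier timestamps (00:14:24.740972 and 00:14:24.770138) appearing after later ones: journald flushes messages that were buffered before it started, so a journal can legitimately be out of order around the "Journal started" point. When measuring boot phases from a transcript like this, it is therefore safer to parse the timestamps than to rely on line order. A small sketch, assuming the year (absent from the log format) is appended by hand:

    import re
    from datetime import datetime

    # Two entries copied from this transcript; real input would be the full journal.
    lines = [
        "Nov 8 00:14:24.737835 kernel: Linux version 6.6.113-flatcar",
        "Nov 8 00:14:29.948312 systemd[1]: network-cleanup.service: Deactivated successfully.",
    ]
    stamp = re.compile(r"^(\w+ \d+ \d+:\d+:\d+\.\d+)")
    times = [datetime.strptime(stamp.match(l).group(1) + " 2025",
                               "%b %d %H:%M:%S.%f %Y") for l in lines]
    print((times[1] - times[0]).total_seconds())  # ~5.21 s of initrd boot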
Nov 8 00:14:24.804636 dracut-cmdline[238]: dracut-dracut-053 Nov 8 00:14:24.807474 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:14:24.807189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:14:24.813350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:14:24.817136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:14:24.835130 systemd-resolved[274]: Positive Trust Anchors: Nov 8 00:14:24.835139 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:14:24.835162 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:14:24.836859 systemd-resolved[274]: Defaulting to hostname 'linux'. Nov 8 00:14:24.837556 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:14:24.837722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:14:24.855049 kernel: SCSI subsystem initialized Nov 8 00:14:24.862043 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:14:24.870068 kernel: iscsi: registered transport (tcp) Nov 8 00:14:24.885064 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:14:24.885124 kernel: QLogic iSCSI HBA Driver Nov 8 00:14:24.906492 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:14:24.911137 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:14:24.926543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:14:24.926590 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:14:24.927882 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:14:24.959048 kernel: raid6: avx2x4 gen() 52236 MB/s Nov 8 00:14:24.976072 kernel: raid6: avx2x2 gen() 41840 MB/s Nov 8 00:14:24.993243 kernel: raid6: avx2x1 gen() 33873 MB/s Nov 8 00:14:24.993295 kernel: raid6: using algorithm avx2x4 gen() 52236 MB/s Nov 8 00:14:25.011245 kernel: raid6: .... xor() 18339 MB/s, rmw enabled Nov 8 00:14:25.011305 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:14:25.025044 kernel: xor: automatically using best checksumming function avx Nov 8 00:14:25.130101 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:14:25.135273 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:14:25.139140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
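The dracut-cmdline entry above echoes the kernel command line verbatim: space-separated key=value tokens plus bare switches, with benign duplicates such as rootflags=rw appearing twice. A minimal parser sketch over an abbreviated subset of those tokens (note that a dict keeps only the last duplicate, whereas the kernel treats repeated console= parameters additively):

    # Abbreviated tokens from the dracut-cmdline entry above.
    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
               "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.autologin")

    params, flags = {}, []
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        if sep:
            params[key] = val   # later duplicates overwrite in this sketch
        else:
            flags.append(key)   # bare switches such as flatcar.autologin
    print(params["root"], flags)  # LABEL=ROOT ['flatcar.autologin']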
Nov 8 00:14:25.146978 systemd-udevd[433]: Using default interface naming scheme 'v255'. Nov 8 00:14:25.149553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:14:25.160154 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:14:25.166696 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Nov 8 00:14:25.181975 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:14:25.186144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:14:25.257716 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:14:25.261143 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:14:25.270863 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:14:25.271537 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:14:25.271652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:14:25.271758 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:14:25.278172 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:14:25.285845 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:14:25.333103 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 8 00:14:25.336198 kernel: vmw_pvscsi: using 64bit dma Nov 8 00:14:25.336227 kernel: vmw_pvscsi: max_id: 16 Nov 8 00:14:25.336236 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 8 00:14:25.343039 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 8 00:14:25.343070 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 8 00:14:25.343079 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 8 00:14:25.343464 kernel: vmw_pvscsi: using MSI-X Nov 8 00:14:25.346042 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 8 00:14:25.348040 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 8 00:14:25.352041 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 8 00:14:25.352187 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 8 00:14:25.357088 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:14:25.365474 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:14:25.365724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:25.366113 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 8 00:14:25.366229 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:14:25.366469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:14:25.366546 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:25.366933 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:25.375798 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 8 00:14:25.373238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:25.383929 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:14:25.383960 kernel: AES CTR mode by8 optimization enabled Nov 8 00:14:25.385124 kernel: libata version 3.00 loaded. 
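The rename eth0 -> ens192 above is systemd's predictable interface naming: "ens192" encodes hot-plug slot 192, which matches the pciehp "Slot #192" advertised earlier by bridge 0000:00:16.0, the bridge leading to bus 0b where the vmxnet3 NIC (0000:0b:00.0) sits. A sketch that recovers that chain from abbreviated excerpts of the log text:

    import re

    log = ("pcieport 0000:00:16.0: pciehp: Slot #192 ... "
           "pci 0000:00:16.0: PCI bridge to [bus 0b] ... "
           "vmxnet3 0000:0b:00.0 ens192: renamed from eth0")  # excerpts from the entries above

    slot = dict(re.findall(r"pcieport (\S+): pciehp: Slot #(\d+)", log))
    bus = dict(re.findall(r"pci (\S+): PCI bridge to \[bus (\w+)\]", log))
    nic = re.search(r"vmxnet3 0000:(\w+):\S+ (\S+): renamed", log)
    bridge = next(b for b, bus_no in bus.items() if bus_no == nic.group(1))
    print(nic.group(2), "<- slot", slot[bridge])  # ens192 <- slot 192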
Nov 8 00:14:25.389041 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 8 00:14:25.390948 kernel: scsi host1: ata_piix Nov 8 00:14:25.391046 kernel: scsi host2: ata_piix Nov 8 00:14:25.394431 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 8 00:14:25.394449 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 8 00:14:25.399077 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 8 00:14:25.399175 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:14:25.399254 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 8 00:14:25.400192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:25.401083 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 8 00:14:25.401162 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 8 00:14:25.404161 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:14:25.407041 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:25.408044 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:14:25.416633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:25.559051 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 8 00:14:25.565042 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 8 00:14:25.589437 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 8 00:14:25.589560 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:14:25.597173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 8 00:14:25.598086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Nov 8 00:14:25.599984 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 8 00:14:25.603202 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:14:25.603304 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (494) Nov 8 00:14:25.607084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:14:25.609401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 8 00:14:25.609689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 8 00:14:25.616207 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:14:25.641745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:25.646045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:25.650048 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:26.651098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:14:26.652002 disk-uuid[588]: The operation has completed successfully. Nov 8 00:14:26.685753 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:14:26.685814 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:14:26.689105 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:14:26.690818 sh[610]: Success Nov 8 00:14:26.699066 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:14:26.731442 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
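The sd line above reports the disk as 17805312 logical blocks of 512 bytes and renders the capacity in both decimal (GB) and binary (GiB) units; the arithmetic checks out either way:

    blocks, block_size = 17805312, 512   # values from the sd 0:0:0:0 entry
    size = blocks * block_size           # 9,116,319,744 bytes
    print(round(size / 1000**3, 2), "GB")   # 9.12 GB (decimal)
    print(round(size / 1024**3, 2), "GiB")  # 8.49 GiB (binary)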
Nov 8 00:14:26.741883 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:14:26.742117 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:14:26.759045 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:14:26.759080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:26.759089 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:14:26.759100 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:14:26.759631 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:14:26.767042 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:14:26.767979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:14:26.777118 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 8 00:14:26.778330 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:14:26.799591 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:26.799625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:26.799634 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:14:26.821045 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:14:26.826945 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:14:26.828056 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:26.832896 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:14:26.842165 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:14:26.843007 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:14:26.843933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:14:26.912712 ignition[670]: Ignition 2.19.0 Nov 8 00:14:26.912719 ignition[670]: Stage: fetch-offline Nov 8 00:14:26.912743 ignition[670]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:26.912748 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:26.912803 ignition[670]: parsed url from cmdline: "" Nov 8 00:14:26.912804 ignition[670]: no config URL provided Nov 8 00:14:26.912807 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:14:26.912812 ignition[670]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:14:26.913186 ignition[670]: config successfully fetched Nov 8 00:14:26.913203 ignition[670]: parsing config with SHA512: c159b550d3094945aab17c4d67d33324f3b71bc7d11664441500586d73a3b96ac71770e51393f92ed76f0be42766f2b828baa449335b2567f381ad626b20b95f Nov 8 00:14:26.915684 unknown[670]: fetched base config from "system" Nov 8 00:14:26.915690 unknown[670]: fetched user config from "vmware" Nov 8 00:14:26.915972 ignition[670]: fetch-offline: fetch-offline passed Nov 8 00:14:26.916009 ignition[670]: Ignition finished successfully Nov 8 00:14:26.917063 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:14:26.930629 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:14:26.935140 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
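Ignition logs the SHA512 of the rendered config ("parsing config with SHA512: c159b5...") before applying it, which makes the exact provisioning input auditable after the fact. A sketch of the same digest computation, assuming (hypothetically) that a copy of the config had been saved to ignition.json:

    import hashlib, pathlib

    # Hypothetical read-back: re-hash a saved copy of the config and compare
    # against the SHA512 value printed by Ignition in the entry above.
    digest = hashlib.sha512(pathlib.Path("ignition.json").read_bytes()).hexdigest()
    print(digest.startswith("c159b550d3094945"))  # True iff the copies match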
Nov 8 00:14:26.946687 systemd-networkd[803]: lo: Link UP Nov 8 00:14:26.946692 systemd-networkd[803]: lo: Gained carrier Nov 8 00:14:26.947447 systemd-networkd[803]: Enumeration completed Nov 8 00:14:26.947721 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:14:26.947737 systemd-networkd[803]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 8 00:14:26.947861 systemd[1]: Reached target network.target - Network. Nov 8 00:14:26.951226 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:14:26.951357 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:14:26.947948 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:14:26.950892 systemd-networkd[803]: ens192: Link UP Nov 8 00:14:26.950895 systemd-networkd[803]: ens192: Gained carrier Nov 8 00:14:26.954194 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:14:26.962112 ignition[805]: Ignition 2.19.0 Nov 8 00:14:26.962119 ignition[805]: Stage: kargs Nov 8 00:14:26.962231 ignition[805]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:26.962238 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:26.962807 ignition[805]: kargs: kargs passed Nov 8 00:14:26.962832 ignition[805]: Ignition finished successfully Nov 8 00:14:26.963866 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:14:26.968140 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:14:26.975528 ignition[813]: Ignition 2.19.0 Nov 8 00:14:26.975536 ignition[813]: Stage: disks Nov 8 00:14:26.975676 ignition[813]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:26.975683 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:26.976401 ignition[813]: disks: disks passed Nov 8 00:14:26.976435 ignition[813]: Ignition finished successfully Nov 8 00:14:26.977150 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:14:26.977674 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:14:26.977917 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:14:26.978191 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:14:26.978427 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:14:26.978659 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:14:26.983124 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:14:26.993571 systemd-fsck[821]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:14:26.994641 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:14:27.002106 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:14:27.067666 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:14:27.068086 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:14:27.068026 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:14:27.071086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:14:27.073024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
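The systemd-fsck line above summarizes the ext4 ROOT filesystem (mounted from sda9 just after) as "14/1628000 files, 120691/1617920 blocks", i.e. inode and block usage; both ratios are tiny on a freshly provisioned image:

    used_inodes, total_inodes = 14, 1628000
    used_blocks, total_blocks = 120691, 1617920
    print(f"inodes: {used_inodes / total_inodes:.4%}")  # 0.0009%
    print(f"blocks: {used_blocks / total_blocks:.2%}")  # 7.46%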
Nov 8 00:14:27.073304 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:14:27.073329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:14:27.073342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:14:27.077378 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:14:27.078049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:14:27.081047 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829) Nov 8 00:14:27.083369 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:27.083387 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:27.083396 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:14:27.088040 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:14:27.089303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:14:27.112413 initrd-setup-root[853]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:14:27.114987 initrd-setup-root[860]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:14:27.117146 initrd-setup-root[867]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:14:27.119258 initrd-setup-root[874]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:14:27.177380 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:14:27.181150 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:14:27.183601 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:14:27.187422 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:27.198021 ignition[942]: INFO : Ignition 2.19.0 Nov 8 00:14:27.200418 ignition[942]: INFO : Stage: mount Nov 8 00:14:27.200549 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:27.200549 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:27.201521 ignition[942]: INFO : mount: mount passed Nov 8 00:14:27.201717 ignition[942]: INFO : Ignition finished successfully Nov 8 00:14:27.203436 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:14:27.207170 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:14:27.212655 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:14:27.594567 systemd-resolved[274]: Detected conflict on linux IN A 139.178.70.104 Nov 8 00:14:27.594578 systemd-resolved[274]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Nov 8 00:14:27.756122 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:14:27.761139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 8 00:14:27.769061 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (955) Nov 8 00:14:27.772221 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:14:27.772239 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:14:27.772247 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:14:27.778045 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:14:27.778425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:14:27.789928 ignition[971]: INFO : Ignition 2.19.0 Nov 8 00:14:27.789928 ignition[971]: INFO : Stage: files Nov 8 00:14:27.790291 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:27.790291 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:27.790627 ignition[971]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:14:27.791422 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:14:27.791422 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:14:27.793773 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:14:27.794013 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:14:27.794232 unknown[971]: wrote ssh authorized keys file for user: core Nov 8 00:14:27.794545 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:14:27.796912 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:14:27.796912 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:14:27.847607 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:14:27.964760 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] 
writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:27.965798 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 8 00:14:28.457191 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:14:28.705191 systemd-networkd[803]: ens192: Gained IPv6LL Nov 8 00:14:29.674239 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:14:29.674729 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:14:29.674729 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 8 00:14:29.674729 ignition[971]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:14:29.827138 ignition[971]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:14:29.829827 ignition[971]: INFO : files: op(12): [finished] setting 
preset to enabled for "prepare-helm.service" Nov 8 00:14:29.831269 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:14:29.831269 ignition[971]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:14:29.831269 ignition[971]: INFO : files: files passed Nov 8 00:14:29.831269 ignition[971]: INFO : Ignition finished successfully Nov 8 00:14:29.831674 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:14:29.835140 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:14:29.837143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:14:29.837905 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:14:29.837970 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:14:29.851373 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:14:29.851373 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:14:29.852524 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:14:29.853427 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:14:29.853957 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:14:29.858190 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:14:29.872102 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:14:29.872167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:14:29.872468 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:14:29.872596 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:14:29.872793 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:14:29.873321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:14:29.883479 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:14:29.888149 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:14:29.893620 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:14:29.893799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:14:29.894255 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:14:29.894518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:14:29.894593 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:14:29.895130 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:14:29.895386 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:14:29.895667 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:14:29.895931 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:14:29.896407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:14:29.896562 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
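The last artifact of the files stage, written by op(13) above, is /etc/.ignition-result.json: a small JSON record of how provisioning went that later tooling can inspect. A hypothetical read-back sketch; the exact schema belongs to Ignition, so the keys printed here are whatever the file actually contains, not assumptions baked into the code:

    import json, pathlib

    # Hypothetical: inspect the result file written by op(13) above.
    result = json.loads(pathlib.Path("/etc/.ignition-result.json").read_text())
    print(sorted(result))  # keys recording when and how provisioning ran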
Nov 8 00:14:29.896964 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:14:29.897188 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:14:29.897579 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:14:29.897846 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:14:29.898087 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:14:29.898153 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:14:29.898667 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:14:29.898938 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:14:29.899082 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:14:29.899127 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:14:29.899351 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:14:29.899412 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:14:29.899690 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:14:29.899754 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:14:29.899962 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:14:29.900103 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:14:29.904058 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:14:29.904244 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:14:29.904451 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:14:29.904638 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:14:29.904711 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:14:29.904914 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:14:29.904983 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:14:29.905217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:14:29.905291 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:14:29.905510 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:14:29.905567 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:14:29.913215 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:14:29.913341 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:14:29.913438 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:14:29.915216 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:14:29.915339 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:14:29.915442 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:14:29.915736 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:14:29.915817 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:14:29.919390 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:14:29.919463 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 8 00:14:29.922693 ignition[1027]: INFO : Ignition 2.19.0 Nov 8 00:14:29.925260 ignition[1027]: INFO : Stage: umount Nov 8 00:14:29.925260 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:14:29.925260 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:14:29.925260 ignition[1027]: INFO : umount: umount passed Nov 8 00:14:29.925260 ignition[1027]: INFO : Ignition finished successfully Nov 8 00:14:29.926165 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:14:29.926239 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:14:29.927452 systemd[1]: Stopped target network.target - Network. Nov 8 00:14:29.927768 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:14:29.927900 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:14:29.928132 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:14:29.928155 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:14:29.928380 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:14:29.928403 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:14:29.928723 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:14:29.928745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:14:29.929045 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:14:29.929302 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:14:29.930929 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:14:29.930988 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:14:29.931858 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:14:29.931909 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:14:29.934622 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:14:29.934697 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:14:29.935112 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:14:29.935137 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:14:29.939181 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:14:29.939287 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:14:29.939320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:14:29.939454 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 8 00:14:29.939479 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:14:29.939598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:14:29.939620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:14:29.939730 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:14:29.939750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:14:29.939909 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:14:29.943834 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:14:29.948312 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 8 00:14:29.948405 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:14:29.954488 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:14:29.954595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:14:29.954982 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:14:29.955022 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:14:29.955308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:14:29.955333 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:14:29.955544 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:14:29.955574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:14:29.955933 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:14:29.955963 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:14:29.956581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:14:29.956613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:14:29.960227 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:14:29.960381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:14:29.960427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:14:29.961160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:14:29.961191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:29.964079 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:14:29.964141 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:14:30.056394 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:14:30.056484 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:14:30.056855 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:14:30.057005 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:14:30.057054 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:14:30.061155 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:14:30.077152 systemd[1]: Switching root. Nov 8 00:14:30.111216 systemd-journald[217]: Journal stopped Nov 8 00:14:32.056068 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Nov 8 00:14:32.056093 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:14:32.056102 kernel: SELinux: policy capability open_perms=1 Nov 8 00:14:32.056107 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:14:32.056113 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:14:32.056119 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:14:32.056126 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:14:32.056132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:14:32.056138 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:14:32.056144 kernel: audit: type=1403 audit(1762560871.419:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:14:32.056150 systemd[1]: Successfully loaded SELinux policy in 35.889ms. Nov 8 00:14:32.056157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.042ms. 
Nov 8 00:14:32.056164 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:14:32.056172 systemd[1]: Detected virtualization vmware. Nov 8 00:14:32.056179 systemd[1]: Detected architecture x86-64. Nov 8 00:14:32.056186 systemd[1]: Detected first boot. Nov 8 00:14:32.056193 systemd[1]: Initializing machine ID from random generator. Nov 8 00:14:32.056201 zram_generator::config[1070]: No configuration found. Nov 8 00:14:32.056208 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:14:32.056216 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:14:32.056224 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Nov 8 00:14:32.056231 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:14:32.056237 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:14:32.056244 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:14:32.056251 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:14:32.056259 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:14:32.056266 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:14:32.056272 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:14:32.056279 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:14:32.056286 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:14:32.056292 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:14:32.056300 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:14:32.056307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:14:32.056314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:14:32.056321 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:14:32.056327 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:14:32.056334 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:14:32.056341 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:14:32.056348 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:14:32.056362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:14:32.056370 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:14:32.056380 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:14:32.056387 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:14:32.056394 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
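The "Ignoring unknown escape sequences" warning above (it recurs at every daemon reload later in this log) is triggered by backslash sequences such as \K and \d. inside the quoted ExecStart of /etc/systemd/system/coreos-metadata.service, which systemd's unit parser attempts to interpret as escapes. A common remedy, sketched here with hypothetical paths, is to move the pipeline into a helper script so the backslashes never pass through unit-file parsing:

    # /etc/systemd/system/coreos-metadata.service -- sketch of the relevant line
    [Service]
    ExecStart=/opt/bin/coreos-metadata-helper /run/metadata/flatcar

    # /opt/bin/coreos-metadata-helper -- hypothetical script mirroring the logged pipeline
    #!/bin/bash
    set -euo pipefail
    OUTPUT="$1"
    {
      echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep 'inet 10.' | grep -Po 'inet \K[\d.]+')"
      echo "COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v 'inet 10.' | grep -Po 'inet \K[\d.]+')"
    } > "${OUTPUT}"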
Nov 8 00:14:32.056401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:14:32.056407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:14:32.056414 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:14:32.056422 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:14:32.056429 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:14:32.056437 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:14:32.056444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:14:32.056451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:14:32.056459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:14:32.056466 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:14:32.056473 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:14:32.056480 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:14:32.056487 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:14:32.056495 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:32.056502 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:14:32.056509 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:14:32.056518 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:14:32.056525 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:14:32.056533 systemd[1]: Reached target machines.target - Containers. Nov 8 00:14:32.056540 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:14:32.056547 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Nov 8 00:14:32.056554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:14:32.056561 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:14:32.056568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:14:32.056577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:14:32.056584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:14:32.056591 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:14:32.056598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:14:32.056605 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:14:32.056612 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:14:32.056619 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:14:32.056626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:14:32.056633 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:14:32.056642 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:14:32.056649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
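The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop starts above all use systemd's templated modprobe@.service, which simply invokes modprobe on its instance name so that kernel module loading can be ordered and tracked like any other unit. A quick way to see the equivalence on a running system (the exact modprobe flags in the template vary across systemd versions):

    systemctl cat modprobe@.service         # show the template's actual ExecStart
    systemctl start modprobe@fuse.service   # roughly equivalent to: modprobe fuse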
Nov 8 00:14:32.056656 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:14:32.056663 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:14:32.056670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:14:32.056677 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:14:32.056684 systemd[1]: Stopped verity-setup.service. Nov 8 00:14:32.056691 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:32.056699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:14:32.056707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:14:32.056714 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:14:32.056721 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:14:32.056728 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:14:32.056736 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:14:32.056743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:14:32.056750 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:14:32.056757 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:14:32.056765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:14:32.056772 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:14:32.056779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:14:32.056786 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:14:32.056793 kernel: fuse: init (API version 7.39) Nov 8 00:14:32.056800 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:14:32.056807 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:14:32.056814 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:14:32.056822 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:14:32.056829 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:14:32.056836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:14:32.056843 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:14:32.056864 systemd-journald[1153]: Collecting audit messages is disabled. Nov 8 00:14:32.056881 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:14:32.056889 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:14:32.056898 systemd-journald[1153]: Journal started Nov 8 00:14:32.056913 systemd-journald[1153]: Runtime Journal (/run/log/journal/490cea176b6940c0adbd678d56b05ef5) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:14:31.852521 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:14:31.869292 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:14:31.869540 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 8 00:14:32.059255 jq[1137]: true Nov 8 00:14:32.060731 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:14:32.064094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:14:32.077814 jq[1165]: true Nov 8 00:14:32.080242 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:14:32.084082 kernel: ACPI: bus type drm_connector registered Nov 8 00:14:32.085051 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:14:32.088058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:14:32.088084 kernel: loop: module loaded Nov 8 00:14:32.093577 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:14:32.096055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:14:32.102050 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:14:32.115254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:14:32.115307 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:14:32.112390 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:14:32.112662 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:14:32.116327 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:14:32.116463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:14:32.116742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:14:32.121289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:14:32.121592 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:14:32.132548 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:14:32.134254 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:14:32.140875 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:14:32.147161 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:14:32.149681 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:14:32.149831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:14:32.152367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:14:32.154080 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:14:32.158040 kernel: loop0: detected capacity change from 0 to 2976 Nov 8 00:14:32.176794 systemd-journald[1153]: Time spent on flushing to /var/log/journal/490cea176b6940c0adbd678d56b05ef5 is 37.616ms for 1838 entries. Nov 8 00:14:32.176794 systemd-journald[1153]: System Journal (/var/log/journal/490cea176b6940c0adbd678d56b05ef5) is 8.0M, max 584.8M, 576.8M free. Nov 8 00:14:32.247211 systemd-journald[1153]: Received client request to flush runtime journal. Nov 8 00:14:32.250954 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:14:32.254149 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
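The journald lines above record the runtime journal (/run/log/journal, capped at 38.6M here) being flushed into the persistent system journal under /var/log/journal once the root filesystem is writable; the flush of 1838 entries took about 37.6ms. The standard way to inspect the result afterwards:

    journalctl --disk-usage                          # size of active and archived journals
    journalctl -b -u systemd-journal-flush.service   # this boot's flush activity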
Nov 8 00:14:32.261220 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:14:32.274622 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:14:32.275963 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:14:32.277099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:14:32.277875 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:14:32.322627 ignition[1187]: Ignition 2.19.0 Nov 8 00:14:32.322814 ignition[1187]: deleting config from guestinfo properties Nov 8 00:14:32.333800 ignition[1187]: Successfully deleted config Nov 8 00:14:32.336345 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Nov 8 00:14:32.349184 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:14:32.355646 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:14:32.361332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:14:32.386064 kernel: loop1: detected capacity change from 0 to 142488 Nov 8 00:14:32.425778 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 8 00:14:32.425791 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Nov 8 00:14:32.429446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:14:32.576054 kernel: loop2: detected capacity change from 0 to 219144 Nov 8 00:14:32.740061 kernel: loop3: detected capacity change from 0 to 140768 Nov 8 00:14:32.800368 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:14:32.807194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:14:32.822455 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Nov 8 00:14:32.904051 kernel: loop4: detected capacity change from 0 to 2976 Nov 8 00:14:32.906346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:14:32.913846 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:14:32.928263 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:14:32.933141 kernel: loop5: detected capacity change from 0 to 142488 Nov 8 00:14:32.959508 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:14:32.962048 kernel: loop6: detected capacity change from 0 to 219144 Nov 8 00:14:32.981957 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:14:33.003048 kernel: loop7: detected capacity change from 0 to 140768 Nov 8 00:14:33.030880 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Nov 8 00:14:33.031205 (sd-merge)[1241]: Merged extensions into '/usr'. Nov 8 00:14:33.037406 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:14:33.037422 systemd[1]: Reloading... 
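The (sd-merge) lines in this span show systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-vmware' onto /usr, after which PID 1 reloads so the units shipped inside those extensions become visible. The standard CLI for inspecting and re-merging, per systemd-sysext(8):

    systemd-sysext status    # which hierarchies are merged, and from which images
    systemd-sysext refresh   # unmerge and re-merge after adding or removing an image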
Nov 8 00:14:33.080038 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1254) Nov 8 00:14:33.078384 systemd-networkd[1249]: lo: Link UP Nov 8 00:14:33.083087 systemd-networkd[1249]: lo: Gained carrier Nov 8 00:14:33.085330 systemd-networkd[1249]: Enumeration completed Nov 8 00:14:33.087796 systemd-networkd[1249]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 8 00:14:33.098048 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:14:33.098225 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:14:33.102823 systemd-networkd[1249]: ens192: Link UP Nov 8 00:14:33.103221 systemd-networkd[1249]: ens192: Gained carrier Nov 8 00:14:33.134046 zram_generator::config[1300]: No configuration found. Nov 8 00:14:33.158257 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 8 00:14:33.163049 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:14:33.221989 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Nov 8 00:14:33.250665 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:14:33.260063 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Nov 8 00:14:33.268618 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:14:33.278081 kernel: Guest personality initialized and is active Nov 8 00:14:33.280306 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 8 00:14:33.280355 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 8 00:14:33.280373 kernel: Initialized host personality Nov 8 00:14:33.317643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:14:33.318513 systemd[1]: Reloading finished in 280 ms. Nov 8 00:14:33.324696 (udev-worker)[1251]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 8 00:14:33.332077 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:14:33.337168 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:14:33.337965 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:14:33.341718 ldconfig[1177]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:14:33.346392 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:14:33.365164 systemd[1]: Starting ensure-sysext.service... Nov 8 00:14:33.368168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:14:33.370075 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:14:33.371180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:14:33.376868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:14:33.377558 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:14:33.385605 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
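Earlier in this span systemd-networkd matched the vmxnet3 NIC ens192 against /etc/systemd/network/00-vmware.network, brought the link up, and gained carrier. A minimal sketch of what such a .network file typically contains (an illustration of the format, not the verbatim Flatcar-shipped file):

    [Match]
    Name=ens192

    [Network]
    DHCP=yes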
Nov 8 00:14:33.385946 systemd[1]: Reloading requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:14:33.385953 systemd[1]: Reloading... Nov 8 00:14:33.395593 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:14:33.395818 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:14:33.396386 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:14:33.396563 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Nov 8 00:14:33.396601 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Nov 8 00:14:33.398530 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:14:33.398534 systemd-tmpfiles[1365]: Skipping /boot Nov 8 00:14:33.403496 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:14:33.403503 systemd-tmpfiles[1365]: Skipping /boot Nov 8 00:14:33.418098 lvm[1369]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:14:33.447051 zram_generator::config[1398]: No configuration found. Nov 8 00:14:33.525947 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:14:33.541826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:14:33.582724 systemd[1]: Reloading finished in 195 ms. Nov 8 00:14:33.609467 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:14:33.609860 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:14:33.610239 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:14:33.610565 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:14:33.617484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:14:33.624309 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:14:33.630088 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:14:33.633308 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:14:33.636286 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:14:33.638539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:14:33.639562 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:14:33.646319 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:14:33.647751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:33.648734 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:14:33.650012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:14:33.658291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:14:33.660181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:14:33.660295 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:33.661885 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:33.662011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:14:33.662227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:33.666145 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:33.673246 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:14:33.673484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:14:33.673599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:14:33.675769 systemd[1]: Finished ensure-sysext.service. Nov 8 00:14:33.682485 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:14:33.683550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:14:33.683676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:14:33.686588 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:14:33.687347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:14:33.689637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:14:33.690278 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:14:33.694335 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:14:33.694470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:14:33.698935 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:14:33.699113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:14:33.700833 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:14:33.716661 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:14:33.735941 systemd-resolved[1470]: Positive Trust Anchors: Nov 8 00:14:33.735954 systemd-resolved[1470]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:14:33.735977 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:14:33.739378 systemd-resolved[1470]: Defaulting to hostname 'linux'. Nov 8 00:14:33.741534 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:14:33.741771 systemd[1]: Reached target network.target - Network. Nov 8 00:14:33.741894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:14:33.753962 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:14:33.761021 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:14:33.761284 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:14:33.761489 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:14:33.762705 augenrules[1499]: No rules Nov 8 00:14:33.766114 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:14:33.771526 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:14:33.785430 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:14:33.785683 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:14:33.785708 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:14:33.785868 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:14:33.786002 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:14:33.786238 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:14:33.786384 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:14:33.786504 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:14:33.786617 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:14:33.786636 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:14:33.786723 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:14:33.787501 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:14:33.788568 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:14:33.792237 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:14:33.792699 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:14:33.792854 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:14:33.792950 systemd[1]: Reached target basic.target - Basic System. 
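The "Positive Trust Anchors" record above is the root zone's KSK-2017 DS record (key tag 20326), which systemd-resolved uses as its built-in DNSSEC trust anchor; the long negative list exempts private and special-use zones (home.arpa, the RFC 1918 reverse zones, .local, and so on) from validation. Validation is opportunistic by default; enforcing it is a one-line change using the standard [Resolve] option in resolved.conf:

    [Resolve]
    DNSSEC=yes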
Nov 8 00:14:33.793071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:14:33.793095 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:14:33.793907 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:14:33.795220 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:14:33.799217 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:14:33.800306 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:14:33.800421 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:14:33.808175 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:14:33.814199 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:14:33.816205 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:14:33.818377 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:14:33.823526 extend-filesystems[1514]: Found loop4 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found loop5 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found loop6 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found loop7 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda1 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda2 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda3 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found usr Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda4 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda6 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda7 Nov 8 00:14:33.823526 extend-filesystems[1514]: Found sda9 Nov 8 00:14:33.823526 extend-filesystems[1514]: Checking size of /dev/sda9 Nov 8 00:14:33.829126 jq[1513]: false Nov 8 00:14:33.824286 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:14:33.824734 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:14:33.825279 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:14:33.828399 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:14:33.835416 dbus-daemon[1512]: [system] SELinux support is enabled Nov 8 00:14:33.838861 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:14:33.841082 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Nov 8 00:14:33.841527 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:14:33.845453 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:14:33.845911 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:14:33.851884 jq[1523]: true Nov 8 00:14:33.851178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 8 00:14:33.851221 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:14:33.851413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:14:33.851425 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:14:33.862009 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:14:33.863147 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Nov 8 00:14:33.867115 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Nov 8 00:14:33.869504 extend-filesystems[1514]: Old size kept for /dev/sda9 Nov 8 00:14:33.869504 extend-filesystems[1514]: Found sr0 Nov 8 00:14:33.870248 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:14:33.870397 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:14:33.875801 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:14:33.876428 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:14:33.876783 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:14:33.879158 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:14:33.887216 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 8 00:14:33.899092 jq[1550]: true Nov 8 00:14:33.909115 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1247) Nov 8 00:14:33.934360 tar[1537]: linux-amd64/LICENSE Nov 8 00:14:33.934608 tar[1537]: linux-amd64/helm Nov 8 00:14:33.947289 kernel: NET: Registered PF_VSOCK protocol family Nov 8 00:14:33.947336 update_engine[1522]: I20251108 00:14:33.940799 1522 main.cc:92] Flatcar Update Engine starting Nov 8 00:14:33.961061 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:14:33.962454 update_engine[1522]: I20251108 00:14:33.962356 1522 update_check_scheduler.cc:74] Next update check in 7m10s Nov 8 00:14:33.969188 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:14:33.971502 unknown[1544]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 8 00:14:33.978783 unknown[1544]: Core dump limit set to -1 Nov 8 00:14:34.098944 systemd-logind[1520]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:14:34.099074 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:14:34.099634 systemd-logind[1520]: New seat seat0. Nov 8 00:14:34.105171 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:14:34.112680 sshd_keygen[1527]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:14:34.112862 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:14:34.114684 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:14:34.115447 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:16:13.347754 systemd-resolved[1470]: Clock change detected. Flushing caches. Nov 8 00:16:13.348006 systemd-timesyncd[1486]: Contacted time server 104.207.148.118:123 (0.flatcar.pool.ntp.org). 
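Note the journal timestamps jump from 00:14 to 00:16 at this point: systemd-timesyncd's first NTP exchange with 0.flatcar.pool.ntp.org stepped the clock forward by roughly two minutes, which is why systemd-resolved logged "Clock change detected. Flushing caches." just above; the initial-synchronization entry that follows completes the step. The pool used is the standard timesyncd setting (server list illustrative):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org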
Nov 8 00:16:13.348564 systemd-timesyncd[1486]: Initial clock synchronization to Sat 2025-11-08 00:16:13.347708 UTC. Nov 8 00:16:13.393400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:16:13.400524 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:16:13.405704 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:16:13.413029 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:16:13.413210 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:16:13.425695 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:16:13.443102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:16:13.449584 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:16:13.451405 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:16:13.451682 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:16:13.507997 containerd[1543]: time="2025-11-08T00:16:13.507554554Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:16:13.529976 containerd[1543]: time="2025-11-08T00:16:13.529388142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:13.530700 containerd[1543]: time="2025-11-08T00:16:13.530677215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:13.530755 containerd[1543]: time="2025-11-08T00:16:13.530746005Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:16:13.530828 containerd[1543]: time="2025-11-08T00:16:13.530819610Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:16:13.530992 containerd[1543]: time="2025-11-08T00:16:13.530980694Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:16:13.531423 containerd[1543]: time="2025-11-08T00:16:13.531411586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:13.531511 containerd[1543]: time="2025-11-08T00:16:13.531499522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:13.531732 containerd[1543]: time="2025-11-08T00:16:13.531722859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:13.531887 containerd[1543]: time="2025-11-08T00:16:13.531874314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:13.532292 containerd[1543]: time="2025-11-08T00:16:13.532281072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:16:13.532349 containerd[1543]: time="2025-11-08T00:16:13.532338751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:13.532393 containerd[1543]: time="2025-11-08T00:16:13.532384840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:13.532475 containerd[1543]: time="2025-11-08T00:16:13.532465152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:13.532872 containerd[1543]: time="2025-11-08T00:16:13.532861275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:13.533518 containerd[1543]: time="2025-11-08T00:16:13.533386786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:13.533518 containerd[1543]: time="2025-11-08T00:16:13.533406629Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:16:13.533518 containerd[1543]: time="2025-11-08T00:16:13.533462900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:16:13.533518 containerd[1543]: time="2025-11-08T00:16:13.533496047Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:16:13.535618 containerd[1543]: time="2025-11-08T00:16:13.535587596Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:16:13.535677 containerd[1543]: time="2025-11-08T00:16:13.535624288Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:16:13.535677 containerd[1543]: time="2025-11-08T00:16:13.535636336Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:16:13.535677 containerd[1543]: time="2025-11-08T00:16:13.535645298Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:16:13.535677 containerd[1543]: time="2025-11-08T00:16:13.535656411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:16:13.535758 containerd[1543]: time="2025-11-08T00:16:13.535742147Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:16:13.535898 containerd[1543]: time="2025-11-08T00:16:13.535884240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:16:13.535994 containerd[1543]: time="2025-11-08T00:16:13.535966555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:16:13.535994 containerd[1543]: time="2025-11-08T00:16:13.535985542Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:16:13.536030 containerd[1543]: time="2025-11-08T00:16:13.535994014Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Nov 8 00:16:13.536030 containerd[1543]: time="2025-11-08T00:16:13.536002799Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536030 containerd[1543]: time="2025-11-08T00:16:13.536011062Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536030 containerd[1543]: time="2025-11-08T00:16:13.536018833Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536030 containerd[1543]: time="2025-11-08T00:16:13.536027100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536035667Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536043110Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536051109Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536058215Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536070622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536079188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536086157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536100 containerd[1543]: time="2025-11-08T00:16:13.536093781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536101386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536109603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536116516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536124201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536131432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536142085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536149122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536156231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536163865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536172753Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536184941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536192142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536211 containerd[1543]: time="2025-11-08T00:16:13.536198207Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536223600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536235164Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536242378Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536249094Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536254740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536262031Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536267814Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:16:13.536400 containerd[1543]: time="2025-11-08T00:16:13.536274173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:16:13.536513 containerd[1543]: time="2025-11-08T00:16:13.536454451Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:16:13.536513 containerd[1543]: time="2025-11-08T00:16:13.536492126Z" level=info msg="Connect containerd service" Nov 8 00:16:13.536610 containerd[1543]: time="2025-11-08T00:16:13.536517307Z" level=info msg="using legacy CRI server" Nov 8 00:16:13.536610 containerd[1543]: time="2025-11-08T00:16:13.536522002Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:16:13.536610 containerd[1543]: time="2025-11-08T00:16:13.536575177Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537018896Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:16:13.537455 
containerd[1543]: time="2025-11-08T00:16:13.537110328Z" level=info msg="Start subscribing containerd event" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537136623Z" level=info msg="Start recovering state" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537173748Z" level=info msg="Start event monitor" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537181326Z" level=info msg="Start snapshots syncer" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537186767Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537191692Z" level=info msg="Start streaming server" Nov 8 00:16:13.537455 containerd[1543]: time="2025-11-08T00:16:13.537436432Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:16:13.540025 containerd[1543]: time="2025-11-08T00:16:13.537463917Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:16:13.540025 containerd[1543]: time="2025-11-08T00:16:13.539105567Z" level=info msg="containerd successfully booted in 0.032070s" Nov 8 00:16:13.537556 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:16:13.557423 systemd-networkd[1249]: ens192: Gained IPv6LL Nov 8 00:16:13.558985 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:16:13.559785 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:16:13.574602 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 8 00:16:13.581248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:13.583950 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:16:13.618157 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:16:13.633147 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:16:13.633398 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 8 00:16:13.634065 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:16:13.711341 tar[1537]: linux-amd64/README.md Nov 8 00:16:13.718724 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:16:14.759406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:14.759779 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:16:14.760078 systemd[1]: Startup finished in 1.003s (kernel) + 6.789s (initrd) + 4.155s (userspace) = 11.947s. Nov 8 00:16:14.765036 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:14.827141 login[1623]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:16:14.827979 login[1626]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:16:14.836186 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:16:14.844019 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:16:14.846084 systemd-logind[1520]: New session 1 of user core. Nov 8 00:16:14.848425 systemd-logind[1520]: New session 2 of user core. Nov 8 00:16:14.852897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Nov 8 00:16:14.858497 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:16:14.860327 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:16:14.950131 systemd[1696]: Queued start job for default target default.target. Nov 8 00:16:14.954142 systemd[1696]: Created slice app.slice - User Application Slice. Nov 8 00:16:14.954160 systemd[1696]: Reached target paths.target - Paths. Nov 8 00:16:14.954169 systemd[1696]: Reached target timers.target - Timers. Nov 8 00:16:14.955128 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:16:14.964248 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:16:14.964288 systemd[1696]: Reached target sockets.target - Sockets. Nov 8 00:16:14.964309 systemd[1696]: Reached target basic.target - Basic System. Nov 8 00:16:14.964336 systemd[1696]: Reached target default.target - Main User Target. Nov 8 00:16:14.964353 systemd[1696]: Startup finished in 100ms. Nov 8 00:16:14.964586 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:16:14.966201 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:16:14.968354 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:16:15.332579 kubelet[1689]: E1108 00:16:15.332515 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:15.334092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:15.334233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:25.584599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:16:25.595504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:25.942092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:25.944672 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:25.973313 kubelet[1740]: E1108 00:16:25.972568 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:25.974921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:25.975015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:36.225356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:16:36.233504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:36.573774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:16:36.576284 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:36.597915 kubelet[1755]: E1108 00:16:36.597882 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:36.599073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:36.599152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:43.251759 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:16:43.253450 systemd[1]: Started sshd@0-139.178.70.104:22-147.75.109.163:60692.service - OpenSSH per-connection server daemon (147.75.109.163:60692). Nov 8 00:16:43.282100 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 60692 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.282924 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.285447 systemd-logind[1520]: New session 3 of user core. Nov 8 00:16:43.295403 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:16:43.351448 systemd[1]: Started sshd@1-139.178.70.104:22-147.75.109.163:60700.service - OpenSSH per-connection server daemon (147.75.109.163:60700). Nov 8 00:16:43.384348 sshd[1769]: Accepted publickey for core from 147.75.109.163 port 60700 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.385514 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.388409 systemd-logind[1520]: New session 4 of user core. Nov 8 00:16:43.398546 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:16:43.449077 sshd[1769]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:43.455845 systemd[1]: sshd@1-139.178.70.104:22-147.75.109.163:60700.service: Deactivated successfully. Nov 8 00:16:43.457055 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:16:43.457896 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:16:43.458947 systemd[1]: Started sshd@2-139.178.70.104:22-147.75.109.163:60712.service - OpenSSH per-connection server daemon (147.75.109.163:60712). Nov 8 00:16:43.459785 systemd-logind[1520]: Removed session 4. Nov 8 00:16:43.494984 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 60712 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.495341 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.498165 systemd-logind[1520]: New session 5 of user core. Nov 8 00:16:43.503403 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:16:43.550621 sshd[1776]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:43.565908 systemd[1]: sshd@2-139.178.70.104:22-147.75.109.163:60712.service: Deactivated successfully. Nov 8 00:16:43.567059 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:16:43.568287 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:16:43.574725 systemd[1]: Started sshd@3-139.178.70.104:22-147.75.109.163:60718.service - OpenSSH per-connection server daemon (147.75.109.163:60718). 
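
The kubelet crash loop above (exit status 1, restart counters 1 and 2, one attempt roughly every 10 s) all traces back to a single missing file: /var/lib/kubelet/config.yaml, which kubeadm writes during init/join. For illustration only, the smallest hand-written config that gets past this particular error, with values assumed rather than taken from this host:

    # Sketch only: kubeadm normally generates this file; do not hand-edit it on a kubeadm-managed node.
    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches SystemdCgroup:true in the containerd runc options above
    staticPodPath: /etc/kubernetes/manifests
    EOF
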
Nov 8 00:16:43.576507 systemd-logind[1520]: Removed session 5. Nov 8 00:16:43.599935 sshd[1783]: Accepted publickey for core from 147.75.109.163 port 60718 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.600708 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.603083 systemd-logind[1520]: New session 6 of user core. Nov 8 00:16:43.614494 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:16:43.664881 sshd[1783]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:43.675609 systemd[1]: sshd@3-139.178.70.104:22-147.75.109.163:60718.service: Deactivated successfully. Nov 8 00:16:43.676681 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:16:43.677621 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:16:43.683825 systemd[1]: Started sshd@4-139.178.70.104:22-147.75.109.163:60722.service - OpenSSH per-connection server daemon (147.75.109.163:60722). Nov 8 00:16:43.684991 systemd-logind[1520]: Removed session 6. Nov 8 00:16:43.704790 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 60722 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.705563 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.708804 systemd-logind[1520]: New session 7 of user core. Nov 8 00:16:43.718389 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:16:43.774076 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:16:43.774240 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:43.784884 sudo[1793]: pam_unix(sudo:session): session closed for user root Nov 8 00:16:43.786353 sshd[1790]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:43.795886 systemd[1]: sshd@4-139.178.70.104:22-147.75.109.163:60722.service: Deactivated successfully. Nov 8 00:16:43.796864 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:16:43.797704 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:16:43.799118 systemd[1]: Started sshd@5-139.178.70.104:22-147.75.109.163:60726.service - OpenSSH per-connection server daemon (147.75.109.163:60726). Nov 8 00:16:43.799710 systemd-logind[1520]: Removed session 7. Nov 8 00:16:43.823688 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 60726 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.824546 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.828065 systemd-logind[1520]: New session 8 of user core. Nov 8 00:16:43.833489 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:16:43.882459 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:16:43.882620 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:43.884772 sudo[1802]: pam_unix(sudo:session): session closed for user root Nov 8 00:16:43.888669 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:16:43.888886 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:43.898488 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
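
The two sudo invocations above first delete the shipped rule files and then bounce audit-rules.service, which reloads the kernel audit ruleset from whatever remains in /etc/audit/rules.d/. Roughly what that service wraps, sketched with the stock auditd tools:

    auditctl -D          # flush the in-kernel ruleset
    augenrules --load    # recompile /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and load it
    auditctl -l          # list active rules; prints "No rules" when the set is empty

With both rule files removed, the reload below accordingly lands on an empty set.
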
Nov 8 00:16:43.899453 auditctl[1805]: No rules Nov 8 00:16:43.899698 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:16:43.899813 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:16:43.901500 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:16:43.920252 augenrules[1823]: No rules Nov 8 00:16:43.921005 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:16:43.921954 sudo[1801]: pam_unix(sudo:session): session closed for user root Nov 8 00:16:43.923103 sshd[1798]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:43.928829 systemd[1]: sshd@5-139.178.70.104:22-147.75.109.163:60726.service: Deactivated successfully. Nov 8 00:16:43.930021 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:16:43.931062 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:16:43.932171 systemd[1]: Started sshd@6-139.178.70.104:22-147.75.109.163:60740.service - OpenSSH per-connection server daemon (147.75.109.163:60740). Nov 8 00:16:43.933920 systemd-logind[1520]: Removed session 8. Nov 8 00:16:43.959110 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 60740 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:16:43.959895 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:43.963616 systemd-logind[1520]: New session 9 of user core. Nov 8 00:16:43.968447 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:16:44.019132 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:16:44.019419 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:44.431562 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:16:44.431847 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:16:44.838028 dockerd[1850]: time="2025-11-08T00:16:44.837809578Z" level=info msg="Starting up" Nov 8 00:16:45.202085 dockerd[1850]: time="2025-11-08T00:16:45.202009647Z" level=info msg="Loading containers: start." Nov 8 00:16:45.338941 kernel: Initializing XFRM netlink socket Nov 8 00:16:45.386415 systemd-networkd[1249]: docker0: Link UP Nov 8 00:16:45.400404 dockerd[1850]: time="2025-11-08T00:16:45.400366800Z" level=info msg="Loading containers: done." Nov 8 00:16:45.409320 dockerd[1850]: time="2025-11-08T00:16:45.409266356Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:16:45.409429 dockerd[1850]: time="2025-11-08T00:16:45.409370299Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:16:45.409487 dockerd[1850]: time="2025-11-08T00:16:45.409472108Z" level=info msg="Daemon has completed initialization" Nov 8 00:16:45.434769 dockerd[1850]: time="2025-11-08T00:16:45.433352733Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:16:45.434553 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:16:46.131561 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck691301952-merged.mount: Deactivated successfully. 
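
dockerd comes up on overlay2 but warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that slows image builds, not normal container operation. The driver choice is easy to confirm from the CLI (standard docker flags, nothing Flatcar-specific assumed):

    docker info --format '{{.Driver}}'              # expect: overlay2
    docker info --format '{{json .DriverStatus}}'   # includes the native-diff status
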
Nov 8 00:16:46.192957 containerd[1543]: time="2025-11-08T00:16:46.192757567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:16:46.849443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:16:46.857594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:46.924607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:46.927016 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:46.950897 kubelet[1997]: E1108 00:16:46.950863 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:46.951884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:46.951960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:47.174069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904980409.mount: Deactivated successfully. Nov 8 00:16:48.849346 containerd[1543]: time="2025-11-08T00:16:48.849282510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:48.849912 containerd[1543]: time="2025-11-08T00:16:48.849890903Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 8 00:16:48.849972 containerd[1543]: time="2025-11-08T00:16:48.849899652Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:48.853308 containerd[1543]: time="2025-11-08T00:16:48.852122670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:48.854159 containerd[1543]: time="2025-11-08T00:16:48.854142552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.661362815s" Nov 8 00:16:48.854277 containerd[1543]: time="2025-11-08T00:16:48.854262482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 8 00:16:48.854616 containerd[1543]: time="2025-11-08T00:16:48.854603178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:16:50.551034 containerd[1543]: time="2025-11-08T00:16:50.550993460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:50.552317 containerd[1543]: time="2025-11-08T00:16:50.552207152Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 8 
00:16:50.557326 containerd[1543]: time="2025-11-08T00:16:50.555212625Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:50.564803 containerd[1543]: time="2025-11-08T00:16:50.564759965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:50.565725 containerd[1543]: time="2025-11-08T00:16:50.565639117Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.710968252s" Nov 8 00:16:50.565725 containerd[1543]: time="2025-11-08T00:16:50.565663864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 8 00:16:50.566707 containerd[1543]: time="2025-11-08T00:16:50.566690289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:16:52.431914 containerd[1543]: time="2025-11-08T00:16:52.431878525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:52.438155 containerd[1543]: time="2025-11-08T00:16:52.437986070Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 8 00:16:52.448260 containerd[1543]: time="2025-11-08T00:16:52.448215831Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:52.455762 containerd[1543]: time="2025-11-08T00:16:52.455729076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:52.456765 containerd[1543]: time="2025-11-08T00:16:52.456431705Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.889719724s" Nov 8 00:16:52.456765 containerd[1543]: time="2025-11-08T00:16:52.456457582Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 8 00:16:52.457029 containerd[1543]: time="2025-11-08T00:16:52.456941376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:16:53.430889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272241513.mount: Deactivated successfully. 
Nov 8 00:16:53.742427 containerd[1543]: time="2025-11-08T00:16:53.742351705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:53.747322 containerd[1543]: time="2025-11-08T00:16:53.747284144Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 8 00:16:53.755586 containerd[1543]: time="2025-11-08T00:16:53.755560298Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:53.763076 containerd[1543]: time="2025-11-08T00:16:53.763053523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:53.763521 containerd[1543]: time="2025-11-08T00:16:53.763289416Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.3063292s" Nov 8 00:16:53.763521 containerd[1543]: time="2025-11-08T00:16:53.763319001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 8 00:16:53.763568 containerd[1543]: time="2025-11-08T00:16:53.763559284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:16:54.573174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719611568.mount: Deactivated successfully. 
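
The pull sequence here (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy above; coredns, pause and etcd still in flight below) is the standard control-plane image set for v1.34.1. The same pre-pull can be driven by hand; a sketch assuming kubeadm and crictl are installed on the box:

    kubeadm config images list --kubernetes-version v1.34.1
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-apiserver:v1.34.1
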
Nov 8 00:16:55.963206 containerd[1543]: time="2025-11-08T00:16:55.962392131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:55.967564 containerd[1543]: time="2025-11-08T00:16:55.967516384Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 8 00:16:55.972805 containerd[1543]: time="2025-11-08T00:16:55.972761052Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:55.979326 containerd[1543]: time="2025-11-08T00:16:55.977899335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:55.979326 containerd[1543]: time="2025-11-08T00:16:55.979212754Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.215634595s" Nov 8 00:16:55.979326 containerd[1543]: time="2025-11-08T00:16:55.979243778Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 8 00:16:55.979790 containerd[1543]: time="2025-11-08T00:16:55.979766935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:16:56.507800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215294725.mount: Deactivated successfully. 
Nov 8 00:16:56.509936 containerd[1543]: time="2025-11-08T00:16:56.509506397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:56.510239 containerd[1543]: time="2025-11-08T00:16:56.510221641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 8 00:16:56.510605 containerd[1543]: time="2025-11-08T00:16:56.510587143Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:56.512111 containerd[1543]: time="2025-11-08T00:16:56.512095067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:56.512567 containerd[1543]: time="2025-11-08T00:16:56.512550968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 532.759234ms" Nov 8 00:16:56.512602 containerd[1543]: time="2025-11-08T00:16:56.512569332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 8 00:16:56.513124 containerd[1543]: time="2025-11-08T00:16:56.513101636Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:16:57.144965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:16:57.153451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:57.853544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:57.857925 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:57.888330 kubelet[2144]: E1108 00:16:57.888291 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:57.891560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:57.891645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:58.826419 update_engine[1522]: I20251108 00:16:58.826363 1522 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:16:59.139334 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2165) Nov 8 00:17:01.471574 containerd[1543]: time="2025-11-08T00:17:01.471548289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:01.472006 containerd[1543]: time="2025-11-08T00:17:01.471975988Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 8 00:17:01.472543 containerd[1543]: time="2025-11-08T00:17:01.472521429Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:01.474476 containerd[1543]: time="2025-11-08T00:17:01.474452726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:01.475263 containerd[1543]: time="2025-11-08T00:17:01.475126297Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.962007892s" Nov 8 00:17:01.475263 containerd[1543]: time="2025-11-08T00:17:01.475146138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 8 00:17:04.708740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:04.712427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:04.729289 systemd[1]: Reloading requested from client PID 2232 ('systemctl') (unit session-9.scope)... Nov 8 00:17:04.729399 systemd[1]: Reloading... Nov 8 00:17:04.793791 zram_generator::config[2273]: No configuration found. Nov 8 00:17:04.865216 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:17:04.881407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:17:04.927682 systemd[1]: Reloading finished in 198 ms. Nov 8 00:17:04.957650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:17:04.957706 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:17:04.957853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:04.959015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:06.247595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:06.251488 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:17:06.294103 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
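
After the daemon reload, kubelet[2338] finally starts against a real config, but logs two flag deprecations: --pod-infra-container-image above and --volume-plugin-dir just below. Both point at the same migration of CLI flags into the KubeletConfiguration file. A config-file sketch for the second one, using the Flexvolume path this host logs further down (field name per the kubelet.config.k8s.io/v1beta1 API; verify it against your kubelet version):

    # Instead of passing --volume-plugin-dir on the kubelet command line:
    cat >> /var/lib/kubelet/config.yaml <<'EOF'
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF

The first warning has no config-file equivalent: as the message says, the sandbox image will instead come from the CRI runtime, i.e. the SandboxImage already visible in the containerd CRI config dump above.
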
Nov 8 00:17:06.294103 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:17:06.313001 kubelet[2338]: I1108 00:17:06.312946 2338 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:17:06.766586 kubelet[2338]: I1108 00:17:06.766554 2338 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:17:06.766586 kubelet[2338]: I1108 00:17:06.766579 2338 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:17:06.848544 kubelet[2338]: I1108 00:17:06.848494 2338 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:17:06.848544 kubelet[2338]: I1108 00:17:06.848533 2338 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:17:06.848773 kubelet[2338]: I1108 00:17:06.848756 2338 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:17:07.310335 kubelet[2338]: E1108 00:17:07.310231 2338 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:17:07.348059 kubelet[2338]: I1108 00:17:07.347883 2338 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:17:07.457385 kubelet[2338]: E1108 00:17:07.457348 2338 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:17:07.457473 kubelet[2338]: I1108 00:17:07.457402 2338 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:17:07.519554 kubelet[2338]: I1108 00:17:07.519522 2338 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:17:07.519749 kubelet[2338]: I1108 00:17:07.519675 2338 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:17:07.531984 kubelet[2338]: I1108 00:17:07.519691 2338 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:17:07.531984 kubelet[2338]: I1108 00:17:07.531980 2338 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:17:07.531984 kubelet[2338]: I1108 00:17:07.531994 2338 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:17:07.532202 kubelet[2338]: I1108 00:17:07.532091 2338 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:17:07.590561 kubelet[2338]: I1108 00:17:07.590468 2338 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:17:07.602939 kubelet[2338]: I1108 00:17:07.602911 2338 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:17:07.602939 kubelet[2338]: I1108 00:17:07.602938 2338 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:17:07.603564 kubelet[2338]: E1108 00:17:07.603541 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:17:07.611016 kubelet[2338]: I1108 00:17:07.610668 2338 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:17:07.611016 kubelet[2338]: I1108 00:17:07.610695 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:17:07.625566 kubelet[2338]: I1108 00:17:07.625443 2338 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:17:07.633113 kubelet[2338]: I1108 00:17:07.633040 2338 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:17:07.633113 kubelet[2338]: I1108 00:17:07.633066 2338 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:17:07.656531 kubelet[2338]: W1108 00:17:07.656481 2338 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:17:07.667329 kubelet[2338]: E1108 00:17:07.666877 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:17:07.696034 kubelet[2338]: I1108 00:17:07.695922 2338 server.go:1262] "Started kubelet" Nov 8 00:17:07.706328 kubelet[2338]: I1108 00:17:07.706296 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:17:07.718164 kubelet[2338]: I1108 00:17:07.717971 2338 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:17:07.738862 kubelet[2338]: I1108 00:17:07.738580 2338 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:17:07.738862 kubelet[2338]: I1108 00:17:07.738627 2338 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:17:07.738862 kubelet[2338]: I1108 00:17:07.738791 2338 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:17:07.765370 kubelet[2338]: I1108 00:17:07.765042 2338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:17:07.767188 kubelet[2338]: I1108 00:17:07.767173 2338 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:17:07.767350 kubelet[2338]: E1108 00:17:07.767334 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:07.805786 kubelet[2338]: I1108 00:17:07.805769 2338 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:17:07.806210 kubelet[2338]: I1108 00:17:07.805891 2338 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:17:07.831316 kubelet[2338]: E1108 00:17:07.830891 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:17:07.831440 kubelet[2338]: I1108 00:17:07.831428 2338 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:17:07.832461 kubelet[2338]: I1108 00:17:07.832449 2338 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:17:07.832574 kubelet[2338]: I1108 00:17:07.832562 2338 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:17:07.833180 kubelet[2338]: I1108 00:17:07.833166 2338 factory.go:223] Registration of the 
containerd container factory successfully Nov 8 00:17:07.861231 kubelet[2338]: E1108 00:17:07.861159 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms" Nov 8 00:17:07.861485 kubelet[2338]: E1108 00:17:07.832986 2338 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875dfea8ec2c513 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:17:07.695887635 +0000 UTC m=+1.442397947,LastTimestamp:2025-11-08 00:17:07.695887635 +0000 UTC m=+1.442397947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:17:07.872337 kubelet[2338]: I1108 00:17:07.872255 2338 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:17:07.873155 kubelet[2338]: I1108 00:17:07.873134 2338 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 8 00:17:07.873155 kubelet[2338]: I1108 00:17:07.873151 2338 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:17:07.873219 kubelet[2338]: I1108 00:17:07.873171 2338 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:17:07.873219 kubelet[2338]: E1108 00:17:07.873199 2338 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:17:07.878117 kubelet[2338]: E1108 00:17:07.878016 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:07.883780 kubelet[2338]: E1108 00:17:07.883764 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:17:07.884630 kubelet[2338]: I1108 00:17:07.884494 2338 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:17:07.884630 kubelet[2338]: I1108 00:17:07.884504 2338 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:17:07.884630 kubelet[2338]: I1108 00:17:07.884519 2338 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:17:07.907668 kubelet[2338]: I1108 00:17:07.907489 2338 policy_none.go:49] "None policy: Start" Nov 8 00:17:07.907668 kubelet[2338]: I1108 00:17:07.907512 2338 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:17:07.907668 kubelet[2338]: I1108 00:17:07.907521 2338 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:17:07.924390 kubelet[2338]: I1108 00:17:07.924369 2338 policy_none.go:47] "Start" Nov 8 00:17:07.932222 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
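
With the none CPU and memory policies selected, the kubelet now builds its QoS cgroup hierarchy as systemd slices: kubepods.slice above, then kubepods-burstable.slice and kubepods-besteffort.slice below, with one per-pod slice nested under those for each static pod. Plain systemd tooling is enough to inspect the result:

    systemctl status kubepods.slice
    systemd-cgls --unit kubepods.slice   # tree view of the pod cgroups under the slice
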
Nov 8 00:17:07.943519 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:17:07.946459 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:17:07.956055 kubelet[2338]: E1108 00:17:07.956035 2338 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:17:07.956315 kubelet[2338]: I1108 00:17:07.956169 2338 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:17:07.956315 kubelet[2338]: I1108 00:17:07.956179 2338 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:17:07.957811 kubelet[2338]: E1108 00:17:07.957765 2338 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:17:07.957938 kubelet[2338]: E1108 00:17:07.957863 2338 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:17:07.968090 kubelet[2338]: I1108 00:17:07.968062 2338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:17:08.027779 systemd[1]: Created slice kubepods-burstable-pode59486787442560b70da2f6db204cd4d.slice - libcontainer container kubepods-burstable-pode59486787442560b70da2f6db204cd4d.slice. Nov 8 00:17:08.032832 kubelet[2338]: E1108 00:17:08.032814 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:08.046963 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 8 00:17:08.049316 kubelet[2338]: E1108 00:17:08.048618 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:08.057821 kubelet[2338]: I1108 00:17:08.057811 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:08.058080 kubelet[2338]: E1108 00:17:08.058065 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Nov 8 00:17:08.062344 kubelet[2338]: E1108 00:17:08.062327 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms" Nov 8 00:17:08.092690 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
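
Everything failing with "connection refused" against 139.178.70.104:6443 in this stretch is the expected bootstrap chicken-and-egg: this kubelet is itself about to launch kube-apiserver as a static pod, and until that pod serves, node registration, lease creation and the watch reflectors all retry with growing backoff (interval 200ms and 400ms above, 800ms and 1.6s further down). A few node-side probes for watching the bootstrap converge, assuming crictl is installed:

    ls /etc/kubernetes/manifests          # the static pod path the kubelet was told to watch
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    curl -k https://139.178.70.104:6443/healthz   # refused until the apiserver container is up
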
Nov 8 00:17:08.094126 kubelet[2338]: E1108 00:17:08.094105 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:08.107377 kubelet[2338]: I1108 00:17:08.107351 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:08.107377 kubelet[2338]: I1108 00:17:08.107379 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:08.107505 kubelet[2338]: I1108 00:17:08.107395 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:08.107505 kubelet[2338]: I1108 00:17:08.107406 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e59486787442560b70da2f6db204cd4d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e59486787442560b70da2f6db204cd4d\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:08.107505 kubelet[2338]: I1108 00:17:08.107418 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e59486787442560b70da2f6db204cd4d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e59486787442560b70da2f6db204cd4d\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:08.107505 kubelet[2338]: I1108 00:17:08.107444 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e59486787442560b70da2f6db204cd4d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e59486787442560b70da2f6db204cd4d\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:08.107505 kubelet[2338]: I1108 00:17:08.107458 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:08.107617 kubelet[2338]: I1108 00:17:08.107470 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:08.107617 kubelet[2338]: I1108 00:17:08.107484 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:08.259988 kubelet[2338]: I1108 00:17:08.259961 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:08.260200 kubelet[2338]: E1108 00:17:08.260178 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Nov 8 00:17:08.359072 containerd[1543]: time="2025-11-08T00:17:08.358846606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e59486787442560b70da2f6db204cd4d,Namespace:kube-system,Attempt:0,}" Nov 8 00:17:08.365381 containerd[1543]: time="2025-11-08T00:17:08.365351327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 8 00:17:08.401220 containerd[1543]: time="2025-11-08T00:17:08.400968033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 8 00:17:08.463161 kubelet[2338]: E1108 00:17:08.463122 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms" Nov 8 00:17:08.661424 kubelet[2338]: I1108 00:17:08.661397 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:08.661716 kubelet[2338]: E1108 00:17:08.661684 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Nov 8 00:17:08.674557 kubelet[2338]: E1108 00:17:08.674528 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:17:08.868011 kubelet[2338]: E1108 00:17:08.867976 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:17:09.061552 kubelet[2338]: E1108 00:17:09.061470 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:17:09.263929 kubelet[2338]: E1108 00:17:09.263883 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" Nov 8 00:17:09.345924 kubelet[2338]: E1108 00:17:09.345832 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:17:09.386476 kubelet[2338]: E1108 00:17:09.386452 2338 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:17:09.462455 kubelet[2338]: I1108 00:17:09.462433 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:09.462621 kubelet[2338]: E1108 00:17:09.462605 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Nov 8 00:17:09.732156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968929536.mount: Deactivated successfully. Nov 8 00:17:09.740131 containerd[1543]: time="2025-11-08T00:17:09.739440963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:17:09.740510 containerd[1543]: time="2025-11-08T00:17:09.740491995Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:17:09.741174 containerd[1543]: time="2025-11-08T00:17:09.741143716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:17:09.745457 containerd[1543]: time="2025-11-08T00:17:09.745412499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:17:09.749536 containerd[1543]: time="2025-11-08T00:17:09.749501683Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:17:09.752290 containerd[1543]: time="2025-11-08T00:17:09.752255942Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:17:09.757554 containerd[1543]: time="2025-11-08T00:17:09.757232317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:17:09.760768 containerd[1543]: time="2025-11-08T00:17:09.760725847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:17:09.761531 
containerd[1543]: time="2025-11-08T00:17:09.761345886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.395946392s" Nov 8 00:17:09.762007 containerd[1543]: time="2025-11-08T00:17:09.761979633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.403078747s" Nov 8 00:17:09.763211 containerd[1543]: time="2025-11-08T00:17:09.763182313Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.362150687s" Nov 8 00:17:10.363578 containerd[1543]: time="2025-11-08T00:17:10.363403398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:10.363578 containerd[1543]: time="2025-11-08T00:17:10.363540856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:10.363578 containerd[1543]: time="2025-11-08T00:17:10.363557353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:10.364428 containerd[1543]: time="2025-11-08T00:17:10.364248333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:10.368646 containerd[1543]: time="2025-11-08T00:17:10.368440474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:10.368646 containerd[1543]: time="2025-11-08T00:17:10.368483132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:10.369182 containerd[1543]: time="2025-11-08T00:17:10.369128226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:10.369389 containerd[1543]: time="2025-11-08T00:17:10.369331268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:10.378966 containerd[1543]: time="2025-11-08T00:17:10.374811966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:10.378966 containerd[1543]: time="2025-11-08T00:17:10.374859314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:10.378966 containerd[1543]: time="2025-11-08T00:17:10.374875099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:10.378966 containerd[1543]: time="2025-11-08T00:17:10.374953341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:10.395464 systemd[1]: Started cri-containerd-135aab4f9f5e9d15e2b5466d0a7b631f3ad3c307bddea214fa0b795937cc8a1c.scope - libcontainer container 135aab4f9f5e9d15e2b5466d0a7b631f3ad3c307bddea214fa0b795937cc8a1c. Nov 8 00:17:10.397894 systemd[1]: Started cri-containerd-a6e66355e3cecc001b254b70df5c8d1f8a1ecda0961213defee423b998f341d9.scope - libcontainer container a6e66355e3cecc001b254b70df5c8d1f8a1ecda0961213defee423b998f341d9. Nov 8 00:17:10.407471 systemd[1]: Started cri-containerd-5d6ca603c4f79e46930231f849ef0c6346fe77aae71b5511c309ab61ed82ff86.scope - libcontainer container 5d6ca603c4f79e46930231f849ef0c6346fe77aae71b5511c309ab61ed82ff86. Nov 8 00:17:10.452796 containerd[1543]: time="2025-11-08T00:17:10.452470786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6e66355e3cecc001b254b70df5c8d1f8a1ecda0961213defee423b998f341d9\"" Nov 8 00:17:10.466093 containerd[1543]: time="2025-11-08T00:17:10.466040283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"135aab4f9f5e9d15e2b5466d0a7b631f3ad3c307bddea214fa0b795937cc8a1c\"" Nov 8 00:17:10.470105 containerd[1543]: time="2025-11-08T00:17:10.469980078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e59486787442560b70da2f6db204cd4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d6ca603c4f79e46930231f849ef0c6346fe77aae71b5511c309ab61ed82ff86\"" Nov 8 00:17:10.472245 containerd[1543]: time="2025-11-08T00:17:10.472210185Z" level=info msg="CreateContainer within sandbox \"a6e66355e3cecc001b254b70df5c8d1f8a1ecda0961213defee423b998f341d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:17:10.478040 containerd[1543]: time="2025-11-08T00:17:10.477985381Z" level=info msg="CreateContainer within sandbox \"135aab4f9f5e9d15e2b5466d0a7b631f3ad3c307bddea214fa0b795937cc8a1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:17:10.499423 containerd[1543]: time="2025-11-08T00:17:10.499321794Z" level=info msg="CreateContainer within sandbox \"5d6ca603c4f79e46930231f849ef0c6346fe77aae71b5511c309ab61ed82ff86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:17:10.659282 containerd[1543]: time="2025-11-08T00:17:10.659236610Z" level=info msg="CreateContainer within sandbox \"a6e66355e3cecc001b254b70df5c8d1f8a1ecda0961213defee423b998f341d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30a4e36cb688bc330f3b0a3464c8d4bd5105351840aaae3a4a727a2094fbcabe\"" Nov 8 00:17:10.659998 containerd[1543]: time="2025-11-08T00:17:10.659973932Z" level=info msg="StartContainer for \"30a4e36cb688bc330f3b0a3464c8d4bd5105351840aaae3a4a727a2094fbcabe\"" Nov 8 00:17:10.662375 containerd[1543]: time="2025-11-08T00:17:10.662195490Z" level=info msg="CreateContainer within sandbox \"135aab4f9f5e9d15e2b5466d0a7b631f3ad3c307bddea214fa0b795937cc8a1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"13cf1347122ffdeef83e52cde2590a4af5bbad4a43e0248f72bc90d4c04df628\"" 
Nov 8 00:17:10.663150 containerd[1543]: time="2025-11-08T00:17:10.663135017Z" level=info msg="CreateContainer within sandbox \"5d6ca603c4f79e46930231f849ef0c6346fe77aae71b5511c309ab61ed82ff86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f34a1be3f60edd1e37493c3dbf98c5d7b07114a85924473dd5b6bb1792094fa\"" Nov 8 00:17:10.663613 containerd[1543]: time="2025-11-08T00:17:10.663596770Z" level=info msg="StartContainer for \"7f34a1be3f60edd1e37493c3dbf98c5d7b07114a85924473dd5b6bb1792094fa\"" Nov 8 00:17:10.667858 containerd[1543]: time="2025-11-08T00:17:10.667823103Z" level=info msg="StartContainer for \"13cf1347122ffdeef83e52cde2590a4af5bbad4a43e0248f72bc90d4c04df628\"" Nov 8 00:17:10.692016 systemd[1]: Started cri-containerd-30a4e36cb688bc330f3b0a3464c8d4bd5105351840aaae3a4a727a2094fbcabe.scope - libcontainer container 30a4e36cb688bc330f3b0a3464c8d4bd5105351840aaae3a4a727a2094fbcabe. Nov 8 00:17:10.700487 systemd[1]: Started cri-containerd-13cf1347122ffdeef83e52cde2590a4af5bbad4a43e0248f72bc90d4c04df628.scope - libcontainer container 13cf1347122ffdeef83e52cde2590a4af5bbad4a43e0248f72bc90d4c04df628. Nov 8 00:17:10.702453 systemd[1]: Started cri-containerd-7f34a1be3f60edd1e37493c3dbf98c5d7b07114a85924473dd5b6bb1792094fa.scope - libcontainer container 7f34a1be3f60edd1e37493c3dbf98c5d7b07114a85924473dd5b6bb1792094fa. Nov 8 00:17:10.756770 containerd[1543]: time="2025-11-08T00:17:10.756705352Z" level=info msg="StartContainer for \"30a4e36cb688bc330f3b0a3464c8d4bd5105351840aaae3a4a727a2094fbcabe\" returns successfully" Nov 8 00:17:10.789986 containerd[1543]: time="2025-11-08T00:17:10.789892711Z" level=info msg="StartContainer for \"7f34a1be3f60edd1e37493c3dbf98c5d7b07114a85924473dd5b6bb1792094fa\" returns successfully" Nov 8 00:17:10.789986 containerd[1543]: time="2025-11-08T00:17:10.789956004Z" level=info msg="StartContainer for \"13cf1347122ffdeef83e52cde2590a4af5bbad4a43e0248f72bc90d4c04df628\" returns successfully" Nov 8 00:17:10.865332 kubelet[2338]: E1108 00:17:10.865024 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="3.2s" Nov 8 00:17:10.898381 kubelet[2338]: E1108 00:17:10.898098 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:10.898648 kubelet[2338]: E1108 00:17:10.898638 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:10.900294 kubelet[2338]: E1108 00:17:10.900286 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:11.064297 kubelet[2338]: I1108 00:17:11.063767 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:11.064789 kubelet[2338]: E1108 00:17:11.064769 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Nov 8 00:17:11.406588 kubelet[2338]: E1108 00:17:11.406553 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:17:11.752023 kubelet[2338]: E1108 00:17:11.751938 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:17:11.864631 kubelet[2338]: E1108 00:17:11.864599 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:17:11.901769 kubelet[2338]: E1108 00:17:11.901654 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:11.902219 kubelet[2338]: E1108 00:17:11.902145 2338 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:17:11.907727 kubelet[2338]: E1108 00:17:11.907697 2338 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:17:14.266182 kubelet[2338]: I1108 00:17:14.266150 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:14.372152 kubelet[2338]: E1108 00:17:14.372129 2338 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:17:14.461061 kubelet[2338]: I1108 00:17:14.460858 2338 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:17:14.461061 kubelet[2338]: E1108 00:17:14.460898 2338 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 8 00:17:14.472512 kubelet[2338]: E1108 00:17:14.472473 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:14.573190 kubelet[2338]: E1108 00:17:14.572722 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:14.673559 kubelet[2338]: E1108 00:17:14.673537 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:14.774575 kubelet[2338]: E1108 00:17:14.774530 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:14.875535 kubelet[2338]: E1108 00:17:14.875431 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:14.975973 kubelet[2338]: E1108 00:17:14.975939 2338 kubelet_node_status.go:404] "Error getting the current 
node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.076722 kubelet[2338]: E1108 00:17:15.076678 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.177631 kubelet[2338]: E1108 00:17:15.177598 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.278331 kubelet[2338]: E1108 00:17:15.278265 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.378912 kubelet[2338]: E1108 00:17:15.378884 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.479231 kubelet[2338]: E1108 00:17:15.479008 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.579897 kubelet[2338]: E1108 00:17:15.579869 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.680904 kubelet[2338]: E1108 00:17:15.680878 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.782147 kubelet[2338]: E1108 00:17:15.781930 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.882177 kubelet[2338]: E1108 00:17:15.882021 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:15.982971 kubelet[2338]: E1108 00:17:15.982937 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:16.083744 kubelet[2338]: E1108 00:17:16.083640 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:16.184536 kubelet[2338]: E1108 00:17:16.184495 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:16.285078 kubelet[2338]: E1108 00:17:16.285038 2338 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:17:16.386680 kubelet[2338]: I1108 00:17:16.386457 2338 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:16.406462 kubelet[2338]: I1108 00:17:16.406433 2338 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:16.481956 kubelet[2338]: I1108 00:17:16.481930 2338 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:16.508667 kubelet[2338]: I1108 00:17:16.508453 2338 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:16.564721 kubelet[2338]: E1108 00:17:16.564700 2338 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:16.628378 kubelet[2338]: I1108 00:17:16.628351 2338 apiserver.go:52] "Watching apiserver" Nov 8 00:17:16.705999 kubelet[2338]: I1108 00:17:16.705960 2338 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:17:16.759921 systemd[1]: Reloading requested from client PID 2624 ('systemctl') (unit session-9.scope)... 
Nov 8 00:17:16.759933 systemd[1]: Reloading... Nov 8 00:17:16.815392 zram_generator::config[2661]: No configuration found. Nov 8 00:17:16.876912 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:17:16.892157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:17:16.944422 systemd[1]: Reloading finished in 184 ms. Nov 8 00:17:16.970741 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:16.976297 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:17:16.976506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:16.980818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:17:18.091929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:17:18.104653 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:17:18.148738 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:17:18.148738 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:17:18.148960 kubelet[2729]: I1108 00:17:18.148773 2729 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:17:18.171556 kubelet[2729]: I1108 00:17:18.171532 2729 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:17:18.171556 kubelet[2729]: I1108 00:17:18.171550 2729 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:17:18.171668 kubelet[2729]: I1108 00:17:18.171571 2729 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:17:18.171668 kubelet[2729]: I1108 00:17:18.171575 2729 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:17:18.171731 kubelet[2729]: I1108 00:17:18.171718 2729 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:17:18.172680 kubelet[2729]: I1108 00:17:18.172668 2729 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:17:18.174208 kubelet[2729]: I1108 00:17:18.173928 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:17:18.207677 kubelet[2729]: E1108 00:17:18.207635 2729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:17:18.207795 kubelet[2729]: I1108 00:17:18.207710 2729 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:17:18.210478 kubelet[2729]: I1108 00:17:18.210457 2729 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:17:18.210600 kubelet[2729]: I1108 00:17:18.210579 2729 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:17:18.210698 kubelet[2729]: I1108 00:17:18.210600 2729 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:17:18.210698 kubelet[2729]: I1108 00:17:18.210699 2729 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:17:18.210799 kubelet[2729]: I1108 00:17:18.210705 2729 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:17:18.210799 kubelet[2729]: I1108 00:17:18.210721 2729 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:17:18.211077 kubelet[2729]: I1108 00:17:18.211063 2729 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:17:18.214555 kubelet[2729]: I1108 00:17:18.214420 2729 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:17:18.214555 kubelet[2729]: I1108 00:17:18.214443 2729 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:17:18.214555 kubelet[2729]: I1108 00:17:18.214462 2729 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:17:18.214555 kubelet[2729]: I1108 00:17:18.214476 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:17:18.222545 kubelet[2729]: I1108 00:17:18.221411 2729 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:17:18.222545 kubelet[2729]: I1108 00:17:18.221825 2729 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:17:18.222545 kubelet[2729]: I1108 00:17:18.221848 2729 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:17:18.226952 kubelet[2729]: I1108 
00:17:18.226936 2729 server.go:1262] "Started kubelet" Nov 8 00:17:18.228358 kubelet[2729]: I1108 00:17:18.228028 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:17:18.246246 kubelet[2729]: I1108 00:17:18.246213 2729 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:17:18.247005 kubelet[2729]: I1108 00:17:18.246915 2729 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:17:18.248939 kubelet[2729]: I1108 00:17:18.248916 2729 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:17:18.251113 kubelet[2729]: I1108 00:17:18.251097 2729 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:17:18.251457 kubelet[2729]: I1108 00:17:18.251447 2729 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:17:18.251746 kubelet[2729]: I1108 00:17:18.249168 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:17:18.253983 kubelet[2729]: I1108 00:17:18.253974 2729 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:17:18.254236 kubelet[2729]: I1108 00:17:18.254228 2729 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:17:18.255310 kubelet[2729]: I1108 00:17:18.255290 2729 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:17:18.261276 kubelet[2729]: I1108 00:17:18.261250 2729 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:17:18.261463 kubelet[2729]: I1108 00:17:18.261454 2729 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:17:18.261693 kubelet[2729]: I1108 00:17:18.261398 2729 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:17:18.263981 kubelet[2729]: I1108 00:17:18.263500 2729 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:17:18.264524 kubelet[2729]: I1108 00:17:18.264509 2729 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:17:18.265389 kubelet[2729]: I1108 00:17:18.265375 2729 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:17:18.265537 kubelet[2729]: I1108 00:17:18.265519 2729 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:17:18.265892 kubelet[2729]: E1108 00:17:18.265755 2729 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:17:18.307274 kubelet[2729]: I1108 00:17:18.307248 2729 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:17:18.307274 kubelet[2729]: I1108 00:17:18.307264 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:17:18.307704 kubelet[2729]: I1108 00:17:18.307686 2729 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307786 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307798 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307812 2729 policy_none.go:49] "None policy: Start" Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307837 2729 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307847 2729 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307914 2729 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:17:18.308038 kubelet[2729]: I1108 00:17:18.307922 2729 policy_none.go:47] "Start" Nov 8 00:17:18.322568 kubelet[2729]: E1108 00:17:18.321672 2729 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:17:18.322568 kubelet[2729]: I1108 00:17:18.321828 2729 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:17:18.322568 kubelet[2729]: I1108 00:17:18.321837 2729 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:17:18.322568 kubelet[2729]: I1108 00:17:18.322043 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:17:18.326239 kubelet[2729]: E1108 00:17:18.325413 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:17:18.367003 kubelet[2729]: I1108 00:17:18.366940 2729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:18.367268 kubelet[2729]: I1108 00:17:18.367255 2729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.367375 kubelet[2729]: I1108 00:17:18.367368 2729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:18.381408 kubelet[2729]: E1108 00:17:18.381367 2729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:18.382415 kubelet[2729]: E1108 00:17:18.382390 2729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.382488 kubelet[2729]: E1108 00:17:18.382473 2729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:18.433277 kubelet[2729]: I1108 00:17:18.431379 2729 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:17:18.436344 kubelet[2729]: I1108 00:17:18.435995 2729 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:17:18.436344 kubelet[2729]: I1108 00:17:18.436047 2729 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:17:18.555552 kubelet[2729]: I1108 00:17:18.555529 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.556277 kubelet[2729]: I1108 00:17:18.556263 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.556386 kubelet[2729]: I1108 00:17:18.556373 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:18.556574 kubelet[2729]: I1108 00:17:18.556436 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e59486787442560b70da2f6db204cd4d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e59486787442560b70da2f6db204cd4d\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:18.556574 kubelet[2729]: I1108 00:17:18.556452 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e59486787442560b70da2f6db204cd4d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e59486787442560b70da2f6db204cd4d\") " pod="kube-system/kube-apiserver-localhost" Nov 8 
00:17:18.556574 kubelet[2729]: I1108 00:17:18.556475 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.556574 kubelet[2729]: I1108 00:17:18.556491 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.556574 kubelet[2729]: I1108 00:17:18.556506 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:17:18.556734 kubelet[2729]: I1108 00:17:18.556527 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e59486787442560b70da2f6db204cd4d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e59486787442560b70da2f6db204cd4d\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:17:19.218790 kubelet[2729]: I1108 00:17:19.218766 2729 apiserver.go:52] "Watching apiserver" Nov 8 00:17:19.255617 kubelet[2729]: I1108 00:17:19.255584 2729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:17:19.280320 kubelet[2729]: I1108 00:17:19.280223 2729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:19.284259 kubelet[2729]: E1108 00:17:19.284234 2729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:17:19.303833 kubelet[2729]: I1108 00:17:19.303781 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.303763425 podStartE2EDuration="3.303763425s" podCreationTimestamp="2025-11-08 00:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:17:19.297601382 +0000 UTC m=+1.179512160" watchObservedRunningTime="2025-11-08 00:17:19.303763425 +0000 UTC m=+1.185674197" Nov 8 00:17:19.310729 kubelet[2729]: I1108 00:17:19.310603 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.310588971 podStartE2EDuration="3.310588971s" podCreationTimestamp="2025-11-08 00:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:17:19.303978039 +0000 UTC m=+1.185888804" watchObservedRunningTime="2025-11-08 00:17:19.310588971 +0000 UTC m=+1.192499748" Nov 8 00:17:22.859130 kubelet[2729]: I1108 00:17:22.858934 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.8589101249999995 podStartE2EDuration="6.858910125s" podCreationTimestamp="2025-11-08 00:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:17:19.317371418 +0000 UTC m=+1.199282189" watchObservedRunningTime="2025-11-08 00:17:22.858910125 +0000 UTC m=+4.740820894" Nov 8 00:17:23.334980 kubelet[2729]: I1108 00:17:23.334963 2729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:17:23.335318 containerd[1543]: time="2025-11-08T00:17:23.335288589Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:17:23.335610 kubelet[2729]: I1108 00:17:23.335401 2729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:17:24.511006 systemd[1]: Created slice kubepods-besteffort-pod9201ce71_a369_4b16_880c_1fa4ed53bd7e.slice - libcontainer container kubepods-besteffort-pod9201ce71_a369_4b16_880c_1fa4ed53bd7e.slice. Nov 8 00:17:24.598415 kubelet[2729]: I1108 00:17:24.598277 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9201ce71-a369-4b16-880c-1fa4ed53bd7e-kube-proxy\") pod \"kube-proxy-z22mp\" (UID: \"9201ce71-a369-4b16-880c-1fa4ed53bd7e\") " pod="kube-system/kube-proxy-z22mp" Nov 8 00:17:24.598415 kubelet[2729]: I1108 00:17:24.598341 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9201ce71-a369-4b16-880c-1fa4ed53bd7e-xtables-lock\") pod \"kube-proxy-z22mp\" (UID: \"9201ce71-a369-4b16-880c-1fa4ed53bd7e\") " pod="kube-system/kube-proxy-z22mp" Nov 8 00:17:24.598415 kubelet[2729]: I1108 00:17:24.598356 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9201ce71-a369-4b16-880c-1fa4ed53bd7e-lib-modules\") pod \"kube-proxy-z22mp\" (UID: \"9201ce71-a369-4b16-880c-1fa4ed53bd7e\") " pod="kube-system/kube-proxy-z22mp" Nov 8 00:17:24.598415 kubelet[2729]: I1108 00:17:24.598369 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cv2p\" (UniqueName: \"kubernetes.io/projected/9201ce71-a369-4b16-880c-1fa4ed53bd7e-kube-api-access-4cv2p\") pod \"kube-proxy-z22mp\" (UID: \"9201ce71-a369-4b16-880c-1fa4ed53bd7e\") " pod="kube-system/kube-proxy-z22mp" Nov 8 00:17:24.669907 systemd[1]: Created slice kubepods-besteffort-pode3789c4a_b31a_4a03_bf2a_315f9361c614.slice - libcontainer container kubepods-besteffort-pode3789c4a_b31a_4a03_bf2a_315f9361c614.slice. 
Nov 8 00:17:24.699043 kubelet[2729]: I1108 00:17:24.698764 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwdq\" (UniqueName: \"kubernetes.io/projected/e3789c4a-b31a-4a03-bf2a-315f9361c614-kube-api-access-jwwdq\") pod \"tigera-operator-65cdcdfd6d-z6d2h\" (UID: \"e3789c4a-b31a-4a03-bf2a-315f9361c614\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-z6d2h" Nov 8 00:17:24.699043 kubelet[2729]: I1108 00:17:24.698795 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3789c4a-b31a-4a03-bf2a-315f9361c614-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-z6d2h\" (UID: \"e3789c4a-b31a-4a03-bf2a-315f9361c614\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-z6d2h" Nov 8 00:17:24.846452 containerd[1543]: time="2025-11-08T00:17:24.846023743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z22mp,Uid:9201ce71-a369-4b16-880c-1fa4ed53bd7e,Namespace:kube-system,Attempt:0,}" Nov 8 00:17:24.932964 containerd[1543]: time="2025-11-08T00:17:24.932738595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:24.932964 containerd[1543]: time="2025-11-08T00:17:24.932801715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:24.932964 containerd[1543]: time="2025-11-08T00:17:24.932821042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:24.932964 containerd[1543]: time="2025-11-08T00:17:24.932899970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:24.951479 systemd[1]: Started cri-containerd-6f0c412e7c32ef0cb74ee8b51d8d73f972ebc5771912eadac2fa4ec7ece66f87.scope - libcontainer container 6f0c412e7c32ef0cb74ee8b51d8d73f972ebc5771912eadac2fa4ec7ece66f87. Nov 8 00:17:24.966786 containerd[1543]: time="2025-11-08T00:17:24.966749442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z22mp,Uid:9201ce71-a369-4b16-880c-1fa4ed53bd7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f0c412e7c32ef0cb74ee8b51d8d73f972ebc5771912eadac2fa4ec7ece66f87\"" Nov 8 00:17:24.980275 containerd[1543]: time="2025-11-08T00:17:24.980246185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-z6d2h,Uid:e3789c4a-b31a-4a03-bf2a-315f9361c614,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:17:24.980481 containerd[1543]: time="2025-11-08T00:17:24.980462132Z" level=info msg="CreateContainer within sandbox \"6f0c412e7c32ef0cb74ee8b51d8d73f972ebc5771912eadac2fa4ec7ece66f87\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:17:25.206312 containerd[1543]: time="2025-11-08T00:17:25.204984792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:25.206312 containerd[1543]: time="2025-11-08T00:17:25.205058090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:25.206312 containerd[1543]: time="2025-11-08T00:17:25.205089890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:25.206312 containerd[1543]: time="2025-11-08T00:17:25.205182149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:25.210928 containerd[1543]: time="2025-11-08T00:17:25.210901198Z" level=info msg="CreateContainer within sandbox \"6f0c412e7c32ef0cb74ee8b51d8d73f972ebc5771912eadac2fa4ec7ece66f87\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8343d860972a6fa7ab32247db328a7ebc7924d318f9a76f0d745f3af483dd539\"" Nov 8 00:17:25.212791 containerd[1543]: time="2025-11-08T00:17:25.212761012Z" level=info msg="StartContainer for \"8343d860972a6fa7ab32247db328a7ebc7924d318f9a76f0d745f3af483dd539\"" Nov 8 00:17:25.224437 systemd[1]: Started cri-containerd-2f1518d0fe0d97e598c37809330b0e32526bb3e58b4f7a038ba8d3d6d1d8f28e.scope - libcontainer container 2f1518d0fe0d97e598c37809330b0e32526bb3e58b4f7a038ba8d3d6d1d8f28e. Nov 8 00:17:25.244376 systemd[1]: Started cri-containerd-8343d860972a6fa7ab32247db328a7ebc7924d318f9a76f0d745f3af483dd539.scope - libcontainer container 8343d860972a6fa7ab32247db328a7ebc7924d318f9a76f0d745f3af483dd539. Nov 8 00:17:25.266973 containerd[1543]: time="2025-11-08T00:17:25.266946838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-z6d2h,Uid:e3789c4a-b31a-4a03-bf2a-315f9361c614,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2f1518d0fe0d97e598c37809330b0e32526bb3e58b4f7a038ba8d3d6d1d8f28e\"" Nov 8 00:17:25.268932 containerd[1543]: time="2025-11-08T00:17:25.268773780Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:17:25.278258 containerd[1543]: time="2025-11-08T00:17:25.278189493Z" level=info msg="StartContainer for \"8343d860972a6fa7ab32247db328a7ebc7924d318f9a76f0d745f3af483dd539\" returns successfully" Nov 8 00:17:25.478424 kubelet[2729]: I1108 00:17:25.478345 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z22mp" podStartSLOduration=1.478335953 podStartE2EDuration="1.478335953s" podCreationTimestamp="2025-11-08 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:17:25.302135703 +0000 UTC m=+7.184046481" watchObservedRunningTime="2025-11-08 00:17:25.478335953 +0000 UTC m=+7.360246726" Nov 8 00:17:27.681543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867694590.mount: Deactivated successfully. 
Nov 8 00:17:28.431010 containerd[1543]: time="2025-11-08T00:17:28.430903689Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:28.442391 containerd[1543]: time="2025-11-08T00:17:28.442282637Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:17:28.469506 containerd[1543]: time="2025-11-08T00:17:28.467634302Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:28.484235 containerd[1543]: time="2025-11-08T00:17:28.484201610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:28.485485 containerd[1543]: time="2025-11-08T00:17:28.485466255Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.216520528s" Nov 8 00:17:28.485565 containerd[1543]: time="2025-11-08T00:17:28.485551604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:17:28.590963 containerd[1543]: time="2025-11-08T00:17:28.590938750Z" level=info msg="CreateContainer within sandbox \"2f1518d0fe0d97e598c37809330b0e32526bb3e58b4f7a038ba8d3d6d1d8f28e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:17:28.606211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143065326.mount: Deactivated successfully. Nov 8 00:17:28.607205 containerd[1543]: time="2025-11-08T00:17:28.607092998Z" level=info msg="CreateContainer within sandbox \"2f1518d0fe0d97e598c37809330b0e32526bb3e58b4f7a038ba8d3d6d1d8f28e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9b106ea714ab213536b0f41ad34c02fdb122aaea4a7f6cd03c545d7517d547b5\"" Nov 8 00:17:28.608795 containerd[1543]: time="2025-11-08T00:17:28.608774979Z" level=info msg="StartContainer for \"9b106ea714ab213536b0f41ad34c02fdb122aaea4a7f6cd03c545d7517d547b5\"" Nov 8 00:17:28.635488 systemd[1]: Started cri-containerd-9b106ea714ab213536b0f41ad34c02fdb122aaea4a7f6cd03c545d7517d547b5.scope - libcontainer container 9b106ea714ab213536b0f41ad34c02fdb122aaea4a7f6cd03c545d7517d547b5. 
Nov 8 00:17:28.665773 containerd[1543]: time="2025-11-08T00:17:28.665653862Z" level=info msg="StartContainer for \"9b106ea714ab213536b0f41ad34c02fdb122aaea4a7f6cd03c545d7517d547b5\" returns successfully" Nov 8 00:17:29.660745 kubelet[2729]: I1108 00:17:29.659347 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-z6d2h" podStartSLOduration=2.360374285 podStartE2EDuration="5.659289658s" podCreationTimestamp="2025-11-08 00:17:24 +0000 UTC" firstStartedPulling="2025-11-08 00:17:25.268172492 +0000 UTC m=+7.150083260" lastFinishedPulling="2025-11-08 00:17:28.567087864 +0000 UTC m=+10.448998633" observedRunningTime="2025-11-08 00:17:29.659266729 +0000 UTC m=+11.541177508" watchObservedRunningTime="2025-11-08 00:17:29.659289658 +0000 UTC m=+11.541200435" Nov 8 00:17:34.363044 sudo[1834]: pam_unix(sudo:session): session closed for user root Nov 8 00:17:34.367478 sshd[1831]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:34.369742 systemd[1]: sshd@6-139.178.70.104:22-147.75.109.163:60740.service: Deactivated successfully. Nov 8 00:17:34.373047 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:17:34.373475 systemd[1]: session-9.scope: Consumed 4.321s CPU time, 145.9M memory peak, 0B memory swap peak. Nov 8 00:17:34.377367 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:17:34.378929 systemd-logind[1520]: Removed session 9. Nov 8 00:17:38.607230 systemd[1]: Created slice kubepods-besteffort-pod1a4bd873_b45a_4866_9e39_1bd94a08685c.slice - libcontainer container kubepods-besteffort-pod1a4bd873_b45a_4866_9e39_1bd94a08685c.slice. Nov 8 00:17:38.660826 kubelet[2729]: I1108 00:17:38.660697 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a4bd873-b45a-4866-9e39-1bd94a08685c-tigera-ca-bundle\") pod \"calico-typha-6d4b498cff-js2r5\" (UID: \"1a4bd873-b45a-4866-9e39-1bd94a08685c\") " pod="calico-system/calico-typha-6d4b498cff-js2r5" Nov 8 00:17:38.660826 kubelet[2729]: I1108 00:17:38.660739 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x95d\" (UniqueName: \"kubernetes.io/projected/1a4bd873-b45a-4866-9e39-1bd94a08685c-kube-api-access-9x95d\") pod \"calico-typha-6d4b498cff-js2r5\" (UID: \"1a4bd873-b45a-4866-9e39-1bd94a08685c\") " pod="calico-system/calico-typha-6d4b498cff-js2r5" Nov 8 00:17:38.660826 kubelet[2729]: I1108 00:17:38.660760 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a4bd873-b45a-4866-9e39-1bd94a08685c-typha-certs\") pod \"calico-typha-6d4b498cff-js2r5\" (UID: \"1a4bd873-b45a-4866-9e39-1bd94a08685c\") " pod="calico-system/calico-typha-6d4b498cff-js2r5" Nov 8 00:17:38.799178 systemd[1]: Created slice kubepods-besteffort-pod1dccf2ea_689c_4481_bed3_8909f8b297ab.slice - libcontainer container kubepods-besteffort-pod1dccf2ea_689c_4481_bed3_8909f8b297ab.slice. 
Nov 8 00:17:38.862060 kubelet[2729]: I1108 00:17:38.861931 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-xtables-lock\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862060 kubelet[2729]: I1108 00:17:38.861967 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-cni-log-dir\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862060 kubelet[2729]: I1108 00:17:38.861986 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-lib-modules\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862060 kubelet[2729]: I1108 00:17:38.862003 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-var-run-calico\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862060 kubelet[2729]: I1108 00:17:38.862019 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dgw\" (UniqueName: \"kubernetes.io/projected/1dccf2ea-689c-4481-bed3-8909f8b297ab-kube-api-access-f9dgw\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862224 kubelet[2729]: I1108 00:17:38.862038 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-policysync\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862224 kubelet[2729]: I1108 00:17:38.862071 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-cni-bin-dir\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862224 kubelet[2729]: I1108 00:17:38.862087 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dccf2ea-689c-4481-bed3-8909f8b297ab-tigera-ca-bundle\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862224 kubelet[2729]: I1108 00:17:38.862102 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-cni-net-dir\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862224 kubelet[2729]: I1108 00:17:38.862121 2729 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-flexvol-driver-host\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862332 kubelet[2729]: I1108 00:17:38.862133 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1dccf2ea-689c-4481-bed3-8909f8b297ab-node-certs\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.862332 kubelet[2729]: I1108 00:17:38.862150 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dccf2ea-689c-4481-bed3-8909f8b297ab-var-lib-calico\") pod \"calico-node-vdv2v\" (UID: \"1dccf2ea-689c-4481-bed3-8909f8b297ab\") " pod="calico-system/calico-node-vdv2v" Nov 8 00:17:38.950744 containerd[1543]: time="2025-11-08T00:17:38.950705042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d4b498cff-js2r5,Uid:1a4bd873-b45a-4866-9e39-1bd94a08685c,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:38.995493 kubelet[2729]: E1108 00:17:38.995449 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:38.995493 kubelet[2729]: W1108 00:17:38.995487 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:38.995611 kubelet[2729]: E1108 00:17:38.995513 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:39.049368 kubelet[2729]: E1108 00:17:39.047276 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:17:39.057414 containerd[1543]: time="2025-11-08T00:17:39.054182035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:39.057414 containerd[1543]: time="2025-11-08T00:17:39.054243299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:39.057414 containerd[1543]: time="2025-11-08T00:17:39.054256338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:39.057414 containerd[1543]: time="2025-11-08T00:17:39.054333253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:39.081994 kubelet[2729]: I1108 00:17:39.075936 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjmr\" (UniqueName: \"kubernetes.io/projected/d3463313-ed82-40c5-914e-c6418d31744b-kube-api-access-hbjmr\") pod \"csi-node-driver-tzsgq\" (UID: \"d3463313-ed82-40c5-914e-c6418d31744b\") " pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:39.081994 kubelet[2729]: I1108 00:17:39.076546 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3463313-ed82-40c5-914e-c6418d31744b-registration-dir\") pod \"csi-node-driver-tzsgq\" (UID: \"d3463313-ed82-40c5-914e-c6418d31744b\") " pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:39.083110 kubelet[2729]: I1108 00:17:39.076764 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3463313-ed82-40c5-914e-c6418d31744b-socket-dir\") pod \"csi-node-driver-tzsgq\" (UID: \"d3463313-ed82-40c5-914e-c6418d31744b\") " pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:39.083110 kubelet[2729]: I1108 00:17:39.077716 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3463313-ed82-40c5-914e-c6418d31744b-varrun\") pod \"csi-node-driver-tzsgq\" (UID: \"d3463313-ed82-40c5-914e-c6418d31744b\") " pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:39.084342 kubelet[2729]: I1108 00:17:39.084291 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3463313-ed82-40c5-914e-c6418d31744b-kubelet-dir\") pod \"csi-node-driver-tzsgq\" (UID: \"d3463313-ed82-40c5-914e-c6418d31744b\") " pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:39.125346 containerd[1543]: time="2025-11-08T00:17:39.125036578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdv2v,Uid:1dccf2ea-689c-4481-bed3-8909f8b297ab,Namespace:calico-system,Attempt:0,}"
Nov 8 00:17:39.147454 systemd[1]: Started cri-containerd-520939bcf0c05043ddd29af103e35bb38f6aeecfaed3ba90f71815698b4c048b.scope - libcontainer container 520939bcf0c05043ddd29af103e35bb38f6aeecfaed3ba90f71815698b4c048b.
Nov 8 00:17:39.198653 containerd[1543]: time="2025-11-08T00:17:39.196116509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d4b498cff-js2r5,Uid:1a4bd873-b45a-4866-9e39-1bd94a08685c,Namespace:calico-system,Attempt:0,} returns sandbox id \"520939bcf0c05043ddd29af103e35bb38f6aeecfaed3ba90f71815698b4c048b\""
Nov 8 00:17:39.198653 containerd[1543]: time="2025-11-08T00:17:39.197579098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:17:39.277929 containerd[1543]: time="2025-11-08T00:17:39.276207499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:17:39.277929 containerd[1543]: time="2025-11-08T00:17:39.276265816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:17:39.277929 containerd[1543]: time="2025-11-08T00:17:39.276352013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:39.277929 containerd[1543]: time="2025-11-08T00:17:39.276478802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:17:39.308516 systemd[1]: Started cri-containerd-1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68.scope - libcontainer container 1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68.
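Every FlexVolume probe failure above pairs two facts: the uds driver binary is absent ("executable file not found in $PATH"), so the call produces no output at all, and the kubelet then fails to decode that empty output as the driver's JSON status; "unexpected end of JSON input" is exactly what Go's encoding/json returns for empty input. A minimal demonstration (the status stand-in is illustrative, not the kubelet's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Stand-in for the kubelet's FlexVolume driver status structure.
	var status struct {
		Status string `json:"status"`
	}
	err := json.Unmarshal([]byte(""), &status) // the missing driver produced no output
	fmt.Println(err)                           // prints: unexpected end of JSON input
}
```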
Nov 8 00:17:39.328688 containerd[1543]: time="2025-11-08T00:17:39.328652889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vdv2v,Uid:1dccf2ea-689c-4481-bed3-8909f8b297ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\"" Nov 8 00:17:40.643047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489314416.mount: Deactivated successfully. Nov 8 00:17:41.266644 kubelet[2729]: E1108 00:17:41.266292 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:17:41.468539 containerd[1543]: time="2025-11-08T00:17:41.467821911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:41.468539 containerd[1543]: time="2025-11-08T00:17:41.468464093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:17:41.473340 containerd[1543]: time="2025-11-08T00:17:41.473282363Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:41.474940 containerd[1543]: time="2025-11-08T00:17:41.474912278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:41.476482 containerd[1543]: time="2025-11-08T00:17:41.475199966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.277601241s" Nov 8 00:17:41.476482 containerd[1543]: time="2025-11-08T00:17:41.475219212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:17:41.476982 containerd[1543]: time="2025-11-08T00:17:41.476641948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:17:41.489539 containerd[1543]: time="2025-11-08T00:17:41.489502840Z" level=info msg="CreateContainer within sandbox \"520939bcf0c05043ddd29af103e35bb38f6aeecfaed3ba90f71815698b4c048b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:17:41.503537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204463978.mount: Deactivated successfully. 
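Each successful pull above is recorded under three names: a repo tag (ghcr.io/flatcar/calico/typha:v3.30.4), an immutable repo digest (the @sha256: form), and the resolved image id. A rough stdlib-only sketch of pulling such a reference apart; real clients use a proper parser such as github.com/distribution/reference, so treat this purely as illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef naively separates a reference into repository, tag and digest.
// Illustrative only; production code should use a real reference parser.
func splitRef(ref string) (repo, tag, digest string) {
	if r, d, ok := strings.Cut(ref, "@"); ok {
		return r, "", d
	}
	// A tag is the part after the last ':' that follows the last '/'
	// (this distinguishes it from a registry port).
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	fmt.Println(splitRef("ghcr.io/flatcar/calico/typha:v3.30.4"))
	fmt.Println(splitRef("ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c"))
}
```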
Nov 8 00:17:41.504958 containerd[1543]: time="2025-11-08T00:17:41.504087098Z" level=info msg="CreateContainer within sandbox \"520939bcf0c05043ddd29af103e35bb38f6aeecfaed3ba90f71815698b4c048b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c548c018d102010cf91765976895b35b3bb1a76ef698a6c7dea4964864449915\"" Nov 8 00:17:41.511059 containerd[1543]: time="2025-11-08T00:17:41.511019787Z" level=info msg="StartContainer for \"c548c018d102010cf91765976895b35b3bb1a76ef698a6c7dea4964864449915\"" Nov 8 00:17:41.571409 systemd[1]: Started cri-containerd-c548c018d102010cf91765976895b35b3bb1a76ef698a6c7dea4964864449915.scope - libcontainer container c548c018d102010cf91765976895b35b3bb1a76ef698a6c7dea4964864449915. Nov 8 00:17:41.623457 containerd[1543]: time="2025-11-08T00:17:41.623292591Z" level=info msg="StartContainer for \"c548c018d102010cf91765976895b35b3bb1a76ef698a6c7dea4964864449915\" returns successfully" Nov 8 00:17:42.600447 kubelet[2729]: E1108 00:17:42.600422 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:42.600447 kubelet[2729]: W1108 00:17:42.600438 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:42.600447 kubelet[2729]: E1108 00:17:42.600454 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:42.600808 kubelet[2729]: E1108 00:17:42.600546 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:42.600808 kubelet[2729]: W1108 00:17:42.600550 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:42.600808 kubelet[2729]: E1108 00:17:42.600556 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:42.600808 kubelet[2729]: E1108 00:17:42.600655 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:42.600808 kubelet[2729]: W1108 00:17:42.600660 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:42.600808 kubelet[2729]: E1108 00:17:42.600665 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:17:42.615641 kubelet[2729]: E1108 00:17:42.615352 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:17:42.615641 kubelet[2729]: W1108 00:17:42.615357 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:17:42.615641 kubelet[2729]: E1108 00:17:42.615362 2729 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:17:42.937185 containerd[1543]: time="2025-11-08T00:17:42.937160472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:17:42.942896 containerd[1543]: time="2025-11-08T00:17:42.942417138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 8 00:17:42.946236 containerd[1543]: time="2025-11-08T00:17:42.946040528Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:17:42.951450 containerd[1543]: time="2025-11-08T00:17:42.951423128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:17:42.952441 containerd[1543]: time="2025-11-08T00:17:42.951890814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.475228771s"
Nov 8 00:17:42.952441 containerd[1543]: time="2025-11-08T00:17:42.951910835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 8 00:17:42.961085 containerd[1543]: time="2025-11-08T00:17:42.961060142Z" level=info msg="CreateContainer within sandbox \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:17:42.994710 containerd[1543]: time="2025-11-08T00:17:42.994680199Z" level=info msg="CreateContainer within sandbox \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac\""
Nov 8 00:17:42.996699 containerd[1543]: time="2025-11-08T00:17:42.995822136Z" level=info msg="StartContainer for \"3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac\""
Nov 8 00:17:43.021416 systemd[1]: Started cri-containerd-3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac.scope - libcontainer container 3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac.
Nov 8 00:17:43.042280 containerd[1543]: time="2025-11-08T00:17:43.042203816Z" level=info msg="StartContainer for \"3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac\" returns successfully"
Nov 8 00:17:43.050426 systemd[1]: cri-containerd-3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac.scope: Deactivated successfully.
Nov 8 00:17:43.063134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac-rootfs.mount: Deactivated successfully.
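The FlexVolume errors above come from kubelet's periodic plugin probe: it execs "<driver> init" and expects a JSON status on stdout. Since /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, the output is empty and unmarshalling fails with "unexpected end of JSON input"; the flexvol-driver container that just ran to completion is the piece that installs that binary. A minimal sketch of the probe contract (the driverStatus struct follows the documented FlexVolume driver-call convention; the error formatting is illustrative, not kubelet's exact code):

```go
// Sketch of a FlexVolume "init" probe. When the driver binary is missing,
// out is empty and json.Unmarshal fails with "unexpected end of JSON
// input", mirroring the driver-call.go errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus follows the documented FlexVolume driver response shape.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) (*driverStatus, error) {
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q (exec error: %v): %w", out, execErr, err)
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```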
Nov 8 00:17:43.266092 kubelet[2729]: E1108 00:17:43.265999 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b"
Nov 8 00:17:43.509835 containerd[1543]: time="2025-11-08T00:17:43.503554809Z" level=info msg="shim disconnected" id=3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac namespace=k8s.io
Nov 8 00:17:43.509835 containerd[1543]: time="2025-11-08T00:17:43.509691535Z" level=warning msg="cleaning up after shim disconnected" id=3d961b45e398ed02f25785a944c0f20e05c3ac3ae536d955f66ccb6084c06aac namespace=k8s.io
Nov 8 00:17:43.509835 containerd[1543]: time="2025-11-08T00:17:43.509702915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:17:43.596385 kubelet[2729]: I1108 00:17:43.595863 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 8 00:17:43.598025 containerd[1543]: time="2025-11-08T00:17:43.597996743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 8 00:17:43.640383 kubelet[2729]: I1108 00:17:43.639203 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d4b498cff-js2r5" podStartSLOduration=3.360571193 podStartE2EDuration="5.639193208s" podCreationTimestamp="2025-11-08 00:17:38 +0000 UTC" firstStartedPulling="2025-11-08 00:17:39.197141882 +0000 UTC m=+21.079052652" lastFinishedPulling="2025-11-08 00:17:41.475763901 +0000 UTC m=+23.357674667" observedRunningTime="2025-11-08 00:17:42.610953416 +0000 UTC m=+24.492864187" watchObservedRunningTime="2025-11-08 00:17:43.639193208 +0000 UTC m=+25.521103986"
Nov 8 00:17:45.266133 kubelet[2729]: E1108 00:17:45.266091 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b"
Nov 8 00:17:47.091222 containerd[1543]: time="2025-11-08T00:17:47.091183771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:17:47.091802 containerd[1543]: time="2025-11-08T00:17:47.091703695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 8 00:17:47.092215 containerd[1543]: time="2025-11-08T00:17:47.092065058Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:17:47.093515 containerd[1543]: time="2025-11-08T00:17:47.093490245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:17:47.094174 containerd[1543]: time="2025-11-08T00:17:47.093894916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.495873257s"
Nov 8 00:17:47.094174 containerd[1543]: time="2025-11-08T00:17:47.093915309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 8 00:17:47.097541 containerd[1543]: time="2025-11-08T00:17:47.097454784Z" level=info msg="CreateContainer within sandbox \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 8 00:17:47.114748 containerd[1543]: time="2025-11-08T00:17:47.114646377Z" level=info msg="CreateContainer within sandbox \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d\""
Nov 8 00:17:47.115246 containerd[1543]: time="2025-11-08T00:17:47.115225755Z" level=info msg="StartContainer for \"4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d\""
Nov 8 00:17:47.140405 systemd[1]: Started cri-containerd-4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d.scope - libcontainer container 4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d.
Nov 8 00:17:47.188641 containerd[1543]: time="2025-11-08T00:17:47.188553510Z" level=info msg="StartContainer for \"4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d\" returns successfully"
Nov 8 00:17:47.266540 kubelet[2729]: E1108 00:17:47.266505 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b"
Nov 8 00:17:48.791603 systemd[1]: cri-containerd-4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d.scope: Deactivated successfully.
Nov 8 00:17:48.813381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d-rootfs.mount: Deactivated successfully.
Nov 8 00:17:48.874363 kubelet[2729]: I1108 00:17:48.874127 2729 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Nov 8 00:17:48.988582 containerd[1543]: time="2025-11-08T00:17:48.988478519Z" level=info msg="shim disconnected" id=4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d namespace=k8s.io
Nov 8 00:17:48.988582 containerd[1543]: time="2025-11-08T00:17:48.988543439Z" level=warning msg="cleaning up after shim disconnected" id=4faeb01d050c9783fc5cd265c9ff95f85c3f97ecb83c3c059640b5ae5437794d namespace=k8s.io
Nov 8 00:17:48.988582 containerd[1543]: time="2025-11-08T00:17:48.988554718Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:17:49.037669 systemd[1]: Created slice kubepods-burstable-pode0e38e94_a84e_4a13_adfe_e757102f7549.slice - libcontainer container kubepods-burstable-pode0e38e94_a84e_4a13_adfe_e757102f7549.slice.
Nov 8 00:17:49.043031 systemd[1]: Created slice kubepods-besteffort-pode3e70873_2d37_40cc_9ba8_c206d83d372d.slice - libcontainer container kubepods-besteffort-pode3e70873_2d37_40cc_9ba8_c206d83d372d.slice.
Nov 8 00:17:49.050428 systemd[1]: Created slice kubepods-besteffort-podd70d6c69_7a19_4335_9162_f8ca1575049e.slice - libcontainer container kubepods-besteffort-podd70d6c69_7a19_4335_9162_f8ca1575049e.slice.
Nov 8 00:17:49.060966 systemd[1]: Created slice kubepods-besteffort-podfcbbea23_b3d5_4961_9b75_cd8ace86f6c6.slice - libcontainer container kubepods-besteffort-podfcbbea23_b3d5_4961_9b75_cd8ace86f6c6.slice.
Nov 8 00:17:49.066212 systemd[1]: Created slice kubepods-burstable-poddb03c30a_8018_4cf5_ac73_034017743c72.slice - libcontainer container kubepods-burstable-poddb03c30a_8018_4cf5_ac73_034017743c72.slice.
Nov 8 00:17:49.073045 systemd[1]: Created slice kubepods-besteffort-pode5b24471_658f_4121_a78d_73d2e59f83f1.slice - libcontainer container kubepods-besteffort-pode5b24471_658f_4121_a78d_73d2e59f83f1.slice.
Nov 8 00:17:49.081723 systemd[1]: Created slice kubepods-besteffort-pod793d1558_9161_49f1_a744_65e7ee945bd5.slice - libcontainer container kubepods-besteffort-pod793d1558_9161_49f1_a744_65e7ee945bd5.slice.
Nov 8 00:17:49.084065 kubelet[2729]: I1108 00:17:49.083493 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszsm\" (UniqueName: \"kubernetes.io/projected/e5b24471-658f-4121-a78d-73d2e59f83f1-kube-api-access-rszsm\") pod \"calico-apiserver-5d46c4cb7-dlrvn\" (UID: \"e5b24471-658f-4121-a78d-73d2e59f83f1\") " pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn"
Nov 8 00:17:49.084065 kubelet[2729]: I1108 00:17:49.083515 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcbbea23-b3d5-4961-9b75-cd8ace86f6c6-calico-apiserver-certs\") pod \"calico-apiserver-5d46c4cb7-lvp22\" (UID: \"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6\") " pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22"
Nov 8 00:17:49.084065 kubelet[2729]: I1108 00:17:49.083524 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5qmc\" (UniqueName: \"kubernetes.io/projected/fcbbea23-b3d5-4961-9b75-cd8ace86f6c6-kube-api-access-p5qmc\") pod \"calico-apiserver-5d46c4cb7-lvp22\" (UID: \"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6\") " pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22"
Nov 8 00:17:49.084065 kubelet[2729]: I1108 00:17:49.083536 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thxx\" (UniqueName: \"kubernetes.io/projected/356509c0-5649-4d69-a66f-1b0a4ca00464-kube-api-access-7thxx\") pod \"goldmane-7c778bb748-dfl5f\" (UID: \"356509c0-5649-4d69-a66f-1b0a4ca00464\") " pod="calico-system/goldmane-7c778bb748-dfl5f"
Nov 8 00:17:49.084065 kubelet[2729]: I1108 00:17:49.083545 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/793d1558-9161-49f1-a744-65e7ee945bd5-calico-apiserver-certs\") pod \"calico-apiserver-7c7df7fbc6-cwgpp\" (UID: \"793d1558-9161-49f1-a744-65e7ee945bd5\") " pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp"
Nov 8 00:17:49.084811 kubelet[2729]: I1108 00:17:49.083556 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0e38e94-a84e-4a13-adfe-e757102f7549-config-volume\") pod \"coredns-66bc5c9577-2wlbm\" (UID: \"e0e38e94-a84e-4a13-adfe-e757102f7549\") " pod="kube-system/coredns-66bc5c9577-2wlbm"
Nov 8 00:17:49.084811 kubelet[2729]: I1108 00:17:49.083570 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9rfr\" (UniqueName: \"kubernetes.io/projected/e0e38e94-a84e-4a13-adfe-e757102f7549-kube-api-access-r9rfr\") pod \"coredns-66bc5c9577-2wlbm\" (UID: \"e0e38e94-a84e-4a13-adfe-e757102f7549\") " pod="kube-system/coredns-66bc5c9577-2wlbm"
Nov 8 00:17:49.084811 kubelet[2729]: I1108 00:17:49.083582 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/356509c0-5649-4d69-a66f-1b0a4ca00464-config\") pod \"goldmane-7c778bb748-dfl5f\" (UID: \"356509c0-5649-4d69-a66f-1b0a4ca00464\") " pod="calico-system/goldmane-7c778bb748-dfl5f"
Nov 8 00:17:49.084811 kubelet[2729]: I1108 00:17:49.083592 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lcn\" (UniqueName: \"kubernetes.io/projected/793d1558-9161-49f1-a744-65e7ee945bd5-kube-api-access-57lcn\") pod \"calico-apiserver-7c7df7fbc6-cwgpp\" (UID: \"793d1558-9161-49f1-a744-65e7ee945bd5\") " pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp"
Nov 8 00:17:49.084811 kubelet[2729]: I1108 00:17:49.083603 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/356509c0-5649-4d69-a66f-1b0a4ca00464-goldmane-key-pair\") pod \"goldmane-7c778bb748-dfl5f\" (UID: \"356509c0-5649-4d69-a66f-1b0a4ca00464\") " pod="calico-system/goldmane-7c778bb748-dfl5f"
Nov 8 00:17:49.084902 kubelet[2729]: I1108 00:17:49.083611 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8sn\" (UniqueName: \"kubernetes.io/projected/db03c30a-8018-4cf5-ac73-034017743c72-kube-api-access-lg8sn\") pod \"coredns-66bc5c9577-fnnck\" (UID: \"db03c30a-8018-4cf5-ac73-034017743c72\") " pod="kube-system/coredns-66bc5c9577-fnnck"
Nov 8 00:17:49.084902 kubelet[2729]: I1108 00:17:49.083624 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/356509c0-5649-4d69-a66f-1b0a4ca00464-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-dfl5f\" (UID: \"356509c0-5649-4d69-a66f-1b0a4ca00464\") " pod="calico-system/goldmane-7c778bb748-dfl5f"
Nov 8 00:17:49.084902 kubelet[2729]: I1108 00:17:49.083639 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db03c30a-8018-4cf5-ac73-034017743c72-config-volume\") pod \"coredns-66bc5c9577-fnnck\" (UID: \"db03c30a-8018-4cf5-ac73-034017743c72\") " pod="kube-system/coredns-66bc5c9577-fnnck"
Nov 8 00:17:49.084902 kubelet[2729]: I1108 00:17:49.083649 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-backend-key-pair\") pod \"whisker-b555dc9bb-5mdbn\" (UID: \"e3e70873-2d37-40cc-9ba8-c206d83d372d\") " pod="calico-system/whisker-b555dc9bb-5mdbn"
Nov 8 00:17:49.084902 kubelet[2729]: I1108 00:17:49.083658 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55tlm\" (UniqueName: \"kubernetes.io/projected/e3e70873-2d37-40cc-9ba8-c206d83d372d-kube-api-access-55tlm\") pod \"whisker-b555dc9bb-5mdbn\" (UID: \"e3e70873-2d37-40cc-9ba8-c206d83d372d\") " pod="calico-system/whisker-b555dc9bb-5mdbn"
Nov 8 00:17:49.085029 kubelet[2729]: I1108 00:17:49.083670 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d70d6c69-7a19-4335-9162-f8ca1575049e-tigera-ca-bundle\") pod \"calico-kube-controllers-6499957458-l5qw9\" (UID: \"d70d6c69-7a19-4335-9162-f8ca1575049e\") " pod="calico-system/calico-kube-controllers-6499957458-l5qw9"
Nov 8 00:17:49.085029 kubelet[2729]: I1108 00:17:49.083680 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kt8q\" (UniqueName: \"kubernetes.io/projected/d70d6c69-7a19-4335-9162-f8ca1575049e-kube-api-access-5kt8q\") pod \"calico-kube-controllers-6499957458-l5qw9\" (UID: \"d70d6c69-7a19-4335-9162-f8ca1575049e\") " pod="calico-system/calico-kube-controllers-6499957458-l5qw9"
Nov 8 00:17:49.085029 kubelet[2729]: I1108 00:17:49.083691 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5b24471-658f-4121-a78d-73d2e59f83f1-calico-apiserver-certs\") pod \"calico-apiserver-5d46c4cb7-dlrvn\" (UID: \"e5b24471-658f-4121-a78d-73d2e59f83f1\") " pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn"
Nov 8 00:17:49.085029 kubelet[2729]: I1108 00:17:49.083699 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-ca-bundle\") pod \"whisker-b555dc9bb-5mdbn\" (UID: \"e3e70873-2d37-40cc-9ba8-c206d83d372d\") " pod="calico-system/whisker-b555dc9bb-5mdbn"
Nov 8 00:17:49.088885 systemd[1]: Created slice kubepods-besteffort-pod356509c0_5649_4d69_a66f_1b0a4ca00464.slice - libcontainer container kubepods-besteffort-pod356509c0_5649_4d69_a66f_1b0a4ca00464.slice.
Nov 8 00:17:49.269634 systemd[1]: Created slice kubepods-besteffort-podd3463313_ed82_40c5_914e_c6418d31744b.slice - libcontainer container kubepods-besteffort-podd3463313_ed82_40c5_914e_c6418d31744b.slice.
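The Created slice entries above follow a fixed naming scheme: the pod's QoS class plus its UID, with the UID's dashes escaped to underscores because systemd treats "-" as a hierarchy separator in slice names. A small sketch reproducing the names seen here (besteffort and burstable are the only QoS classes that appear in this excerpt; guaranteed pods are named without a QoS segment and do not occur here):

```go
// Sketch of the kubelet systemd cgroup driver's slice naming as seen in
// the "Created slice" entries above.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds a slice name from a QoS class ("besteffort" or
// "burstable") and a pod UID, escaping dashes for systemd.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "d3463313-ed82-40c5-914e-c6418d31744b"))
	// kubepods-besteffort-podd3463313_ed82_40c5_914e_c6418d31744b.slice
}
```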
Nov 8 00:17:49.273024 containerd[1543]: time="2025-11-08T00:17:49.272735360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzsgq,Uid:d3463313-ed82-40c5-914e-c6418d31744b,Namespace:calico-system,Attempt:0,}"
Nov 8 00:17:49.341750 containerd[1543]: time="2025-11-08T00:17:49.341666801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2wlbm,Uid:e0e38e94-a84e-4a13-adfe-e757102f7549,Namespace:kube-system,Attempt:0,}"
Nov 8 00:17:49.362071 containerd[1543]: time="2025-11-08T00:17:49.362042144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b555dc9bb-5mdbn,Uid:e3e70873-2d37-40cc-9ba8-c206d83d372d,Namespace:calico-system,Attempt:0,}"
Nov 8 00:17:49.394529 containerd[1543]: time="2025-11-08T00:17:49.394500369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dfl5f,Uid:356509c0-5649-4d69-a66f-1b0a4ca00464,Namespace:calico-system,Attempt:0,}"
Nov 8 00:17:49.397396 containerd[1543]: time="2025-11-08T00:17:49.394745250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6499957458-l5qw9,Uid:d70d6c69-7a19-4335-9162-f8ca1575049e,Namespace:calico-system,Attempt:0,}"
Nov 8 00:17:49.397396 containerd[1543]: time="2025-11-08T00:17:49.394772176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-lvp22,Uid:fcbbea23-b3d5-4961-9b75-cd8ace86f6c6,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:17:49.397396 containerd[1543]: time="2025-11-08T00:17:49.394794030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c7df7fbc6-cwgpp,Uid:793d1558-9161-49f1-a744-65e7ee945bd5,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:17:49.397396 containerd[1543]: time="2025-11-08T00:17:49.394919958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fnnck,Uid:db03c30a-8018-4cf5-ac73-034017743c72,Namespace:kube-system,Attempt:0,}"
Nov 8 00:17:49.397396 containerd[1543]: time="2025-11-08T00:17:49.394943261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-dlrvn,Uid:e5b24471-658f-4121-a78d-73d2e59f83f1,Namespace:calico-apiserver,Attempt:0,}"
Nov 8 00:17:49.620967 containerd[1543]: time="2025-11-08T00:17:49.619538520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 8 00:17:49.666033 containerd[1543]: time="2025-11-08T00:17:49.665920069Z" level=error msg="Failed to destroy network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.666140 containerd[1543]: time="2025-11-08T00:17:49.666112936Z" level=error msg="Failed to destroy network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.667862 containerd[1543]: time="2025-11-08T00:17:49.667751568Z" level=error msg="encountered an error cleaning up failed sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.667862 containerd[1543]: time="2025-11-08T00:17:49.667818545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-lvp22,Uid:fcbbea23-b3d5-4961-9b75-cd8ace86f6c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.672874 containerd[1543]: time="2025-11-08T00:17:49.671506295Z" level=error msg="encountered an error cleaning up failed sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.672874 containerd[1543]: time="2025-11-08T00:17:49.672701505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-dlrvn,Uid:e5b24471-658f-4121-a78d-73d2e59f83f1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.674200 containerd[1543]: time="2025-11-08T00:17:49.674168343Z" level=error msg="Failed to destroy network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.675469 containerd[1543]: time="2025-11-08T00:17:49.674756646Z" level=error msg="encountered an error cleaning up failed sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.675469 containerd[1543]: time="2025-11-08T00:17:49.674798570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dfl5f,Uid:356509c0-5649-4d69-a66f-1b0a4ca00464,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.676792 containerd[1543]: time="2025-11-08T00:17:49.676741860Z" level=error msg="Failed to destroy network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.677253 containerd[1543]: time="2025-11-08T00:17:49.677234419Z" level=error msg="encountered an error cleaning up failed sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.677391 containerd[1543]: time="2025-11-08T00:17:49.677371371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzsgq,Uid:d3463313-ed82-40c5-914e-c6418d31744b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.677548 containerd[1543]: time="2025-11-08T00:17:49.677535219Z" level=error msg="Failed to destroy network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.677818 containerd[1543]: time="2025-11-08T00:17:49.677802512Z" level=error msg="encountered an error cleaning up failed sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.677883 containerd[1543]: time="2025-11-08T00:17:49.677870001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2wlbm,Uid:e0e38e94-a84e-4a13-adfe-e757102f7549,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.678895 containerd[1543]: time="2025-11-08T00:17:49.678878318Z" level=error msg="Failed to destroy network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.679176 containerd[1543]: time="2025-11-08T00:17:49.679160226Z" level=error msg="encountered an error cleaning up failed sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.679789 containerd[1543]: time="2025-11-08T00:17:49.679774604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fnnck,Uid:db03c30a-8018-4cf5-ac73-034017743c72,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.680379 kubelet[2729]: E1108 00:17:49.680340 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.680510 containerd[1543]: time="2025-11-08T00:17:49.680497027Z" level=error msg="Failed to destroy network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.680556 kubelet[2729]: E1108 00:17:49.680368 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.680778 containerd[1543]: time="2025-11-08T00:17:49.680764138Z" level=error msg="encountered an error cleaning up failed sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.680841 containerd[1543]: time="2025-11-08T00:17:49.680828419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6499957458-l5qw9,Uid:d70d6c69-7a19-4335-9162-f8ca1575049e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.680967 containerd[1543]: time="2025-11-08T00:17:49.680955514Z" level=error msg="Failed to destroy network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.682174 kubelet[2729]: E1108 00:17:49.680996 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:49.682174 kubelet[2729]: E1108 00:17:49.681018 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tzsgq"
Nov 8 00:17:49.682174 kubelet[2729]: E1108 00:17:49.681060 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b"
Nov 8 00:17:49.682719 kubelet[2729]: E1108 00:17:49.681142 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dfl5f"
Nov 8 00:17:49.682719 kubelet[2729]: E1108 00:17:49.681166 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dfl5f"
Nov 8 00:17:49.682719 kubelet[2729]: E1108 00:17:49.681190 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dfl5f_calico-system(356509c0-5649-4d69-a66f-1b0a4ca00464)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dfl5f_calico-system(356509c0-5649-4d69-a66f-1b0a4ca00464)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464"
Nov 8 00:17:49.682835 kubelet[2729]: E1108 00:17:49.681343 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.682835 kubelet[2729]: E1108 00:17:49.681359 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fnnck"
Nov 8 00:17:49.682835 kubelet[2729]: E1108 00:17:49.681369 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fnnck"
Nov 8 00:17:49.682921 kubelet[2729]: E1108 00:17:49.681388 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fnnck_kube-system(db03c30a-8018-4cf5-ac73-034017743c72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fnnck_kube-system(db03c30a-8018-4cf5-ac73-034017743c72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fnnck" podUID="db03c30a-8018-4cf5-ac73-034017743c72"
Nov 8 00:17:49.682921 kubelet[2729]: E1108 00:17:49.681407 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.682921 kubelet[2729]: E1108 00:17:49.681417 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2wlbm"
Nov 8 00:17:49.683629 containerd[1543]: time="2025-11-08T00:17:49.682830778Z" level=error msg="encountered an error cleaning up failed sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.683629 containerd[1543]: time="2025-11-08T00:17:49.682874025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c7df7fbc6-cwgpp,Uid:793d1558-9161-49f1-a744-65e7ee945bd5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.683746 kubelet[2729]: E1108 00:17:49.681425 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2wlbm"
Nov 8 00:17:49.683746 kubelet[2729]: E1108 00:17:49.681440 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2wlbm_kube-system(e0e38e94-a84e-4a13-adfe-e757102f7549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2wlbm_kube-system(e0e38e94-a84e-4a13-adfe-e757102f7549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2wlbm" podUID="e0e38e94-a84e-4a13-adfe-e757102f7549"
Nov 8 00:17:49.683746 kubelet[2729]: E1108 00:17:49.681470 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 8 00:17:49.683820 kubelet[2729]: E1108 00:17:49.681486 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn"
Nov 8 00:17:49.683820 kubelet[2729]: E1108 00:17:49.681494 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn"
Nov 8 00:17:49.683820 kubelet[2729]: E1108 00:17:49.681513 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d46c4cb7-dlrvn_calico-apiserver(e5b24471-658f-4121-a78d-73d2e59f83f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d46c4cb7-dlrvn_calico-apiserver(e5b24471-658f-4121-a78d-73d2e59f83f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1"
Nov 8 00:17:49.683894 kubelet[2729]: E1108 00:17:49.681229 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.683894 kubelet[2729]: E1108 00:17:49.682436 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.683894 kubelet[2729]: E1108 00:17:49.682458 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" Nov 8 00:17:49.683894 kubelet[2729]: E1108 00:17:49.682473 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" Nov 8 00:17:49.683967 kubelet[2729]: E1108 00:17:49.682482 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" Nov 8 00:17:49.683967 kubelet[2729]: E1108 00:17:49.682497 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6499957458-l5qw9_calico-system(d70d6c69-7a19-4335-9162-f8ca1575049e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6499957458-l5qw9_calico-system(d70d6c69-7a19-4335-9162-f8ca1575049e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:17:49.683967 kubelet[2729]: E1108 00:17:49.682499 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" Nov 8 00:17:49.684036 kubelet[2729]: E1108 00:17:49.682562 2729 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d46c4cb7-lvp22_calico-apiserver(fcbbea23-b3d5-4961-9b75-cd8ace86f6c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d46c4cb7-lvp22_calico-apiserver(fcbbea23-b3d5-4961-9b75-cd8ace86f6c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:17:49.684036 kubelet[2729]: E1108 00:17:49.682997 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.684036 kubelet[2729]: E1108 00:17:49.683017 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" Nov 8 00:17:49.684126 kubelet[2729]: E1108 00:17:49.683030 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" Nov 8 00:17:49.684126 kubelet[2729]: E1108 00:17:49.683060 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c7df7fbc6-cwgpp_calico-apiserver(793d1558-9161-49f1-a744-65e7ee945bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c7df7fbc6-cwgpp_calico-apiserver(793d1558-9161-49f1-a744-65e7ee945bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:17:49.685580 containerd[1543]: time="2025-11-08T00:17:49.685420694Z" level=error msg="Failed to destroy network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.685956 containerd[1543]: time="2025-11-08T00:17:49.685933595Z" level=error msg="encountered 
an error cleaning up failed sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.685990 containerd[1543]: time="2025-11-08T00:17:49.685970731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b555dc9bb-5mdbn,Uid:e3e70873-2d37-40cc-9ba8-c206d83d372d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.686210 kubelet[2729]: E1108 00:17:49.686103 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:49.686210 kubelet[2729]: E1108 00:17:49.686138 2729 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b555dc9bb-5mdbn" Nov 8 00:17:49.686210 kubelet[2729]: E1108 00:17:49.686151 2729 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b555dc9bb-5mdbn" Nov 8 00:17:49.686297 kubelet[2729]: E1108 00:17:49.686180 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b555dc9bb-5mdbn_calico-system(e3e70873-2d37-40cc-9ba8-c206d83d372d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b555dc9bb-5mdbn_calico-system(e3e70873-2d37-40cc-9ba8-c206d83d372d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b555dc9bb-5mdbn" podUID="e3e70873-2d37-40cc-9ba8-c206d83d372d" Nov 8 00:17:49.814685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf-shm.mount: Deactivated successfully. 
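
Every failure in the burst above bottoms out in the same stat: the Calico CNI binary checks for /var/lib/calico/nodename before handling any ADD or DEL, and that file is only written by the calico/node container once it has registered the node — which is exactly what the error text suggests checking. A minimal sketch of that readiness condition (illustrative only, not Calico's actual source; only the path comes from the log):

    package main

    import (
    	"fmt"
    	"os"
    )

    // Path the calico CNI plugin stats before doing anything else;
    // calico/node creates it after registering the node.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
    	name, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		if os.IsNotExist(err) {
    			// The condition behind every RunPodSandbox failure above.
    			fmt.Fprintln(os.Stderr, "calico/node not ready:", err)
    			os.Exit(1)
    		}
    		fmt.Fprintln(os.Stderr, "unexpected error:", err)
    		os.Exit(1)
    	}
    	fmt.Printf("node registered as %q\n", string(name))
    }
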
Nov 8 00:17:50.657062 kubelet[2729]: I1108 00:17:50.657040 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:17:50.658191 kubelet[2729]: I1108 00:17:50.658174 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:17:50.658809 kubelet[2729]: I1108 00:17:50.658795 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:17:50.660653 kubelet[2729]: I1108 00:17:50.660634 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:17:50.665006 kubelet[2729]: I1108 00:17:50.661540 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:17:50.665006 kubelet[2729]: I1108 00:17:50.662271 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:17:50.665006 kubelet[2729]: I1108 00:17:50.663231 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:17:50.665006 kubelet[2729]: I1108 00:17:50.664558 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:17:50.666253 kubelet[2729]: I1108 00:17:50.665908 2729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:17:50.728982 containerd[1543]: time="2025-11-08T00:17:50.728956705Z" level=info msg="StopPodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\"" Nov 8 00:17:50.729392 containerd[1543]: time="2025-11-08T00:17:50.729258763Z" level=info msg="StopPodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\"" Nov 8 00:17:50.730027 containerd[1543]: time="2025-11-08T00:17:50.729998232Z" level=info msg="Ensure that sandbox 3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a in task-service has been cleanup successfully" Nov 8 00:17:50.730168 containerd[1543]: time="2025-11-08T00:17:50.730059764Z" level=info msg="Ensure that sandbox 323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff in task-service has been cleanup successfully" Nov 8 00:17:50.730917 containerd[1543]: time="2025-11-08T00:17:50.729133973Z" level=info msg="StopPodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\"" Nov 8 00:17:50.731042 containerd[1543]: time="2025-11-08T00:17:50.731031415Z" level=info msg="Ensure that sandbox adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e in task-service has been cleanup successfully" Nov 8 00:17:50.731515 containerd[1543]: time="2025-11-08T00:17:50.729176035Z" level=info msg="StopPodSandbox for \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\"" Nov 8 00:17:50.731515 containerd[1543]: time="2025-11-08T00:17:50.731405233Z" level=info msg="Ensure that sandbox 8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6 in task-service has been cleanup 
successfully" Nov 8 00:17:50.733886 containerd[1543]: time="2025-11-08T00:17:50.729161402Z" level=info msg="StopPodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\"" Nov 8 00:17:50.734035 containerd[1543]: time="2025-11-08T00:17:50.734024872Z" level=info msg="Ensure that sandbox 0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644 in task-service has been cleanup successfully" Nov 8 00:17:50.735720 containerd[1543]: time="2025-11-08T00:17:50.729234653Z" level=info msg="StopPodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\"" Nov 8 00:17:50.735908 containerd[1543]: time="2025-11-08T00:17:50.735896941Z" level=info msg="Ensure that sandbox 3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf in task-service has been cleanup successfully" Nov 8 00:17:50.737569 containerd[1543]: time="2025-11-08T00:17:50.729190154Z" level=info msg="StopPodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\"" Nov 8 00:17:50.737669 containerd[1543]: time="2025-11-08T00:17:50.737655203Z" level=info msg="Ensure that sandbox 0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170 in task-service has been cleanup successfully" Nov 8 00:17:50.738012 containerd[1543]: time="2025-11-08T00:17:50.729205561Z" level=info msg="StopPodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\"" Nov 8 00:17:50.738093 containerd[1543]: time="2025-11-08T00:17:50.738080399Z" level=info msg="Ensure that sandbox 29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d in task-service has been cleanup successfully" Nov 8 00:17:50.738395 containerd[1543]: time="2025-11-08T00:17:50.729225576Z" level=info msg="StopPodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\"" Nov 8 00:17:50.738395 containerd[1543]: time="2025-11-08T00:17:50.738376150Z" level=info msg="Ensure that sandbox ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7 in task-service has been cleanup successfully" Nov 8 00:17:50.777792 containerd[1543]: time="2025-11-08T00:17:50.777758623Z" level=error msg="StopPodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" failed" error="failed to destroy network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.778071 kubelet[2729]: E1108 00:17:50.778041 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:17:50.778117 kubelet[2729]: E1108 00:17:50.778089 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff"} Nov 8 00:17:50.778138 kubelet[2729]: E1108 00:17:50.778122 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3e70873-2d37-40cc-9ba8-c206d83d372d\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.778182 kubelet[2729]: E1108 00:17:50.778142 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3e70873-2d37-40cc-9ba8-c206d83d372d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b555dc9bb-5mdbn" podUID="e3e70873-2d37-40cc-9ba8-c206d83d372d" Nov 8 00:17:50.782611 containerd[1543]: time="2025-11-08T00:17:50.782516739Z" level=error msg="StopPodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" failed" error="failed to destroy network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.782978 kubelet[2729]: E1108 00:17:50.782879 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:17:50.782978 kubelet[2729]: E1108 00:17:50.782914 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf"} Nov 8 00:17:50.783847 kubelet[2729]: E1108 00:17:50.783107 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3463313-ed82-40c5-914e-c6418d31744b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.783847 kubelet[2729]: E1108 00:17:50.783130 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3463313-ed82-40c5-914e-c6418d31744b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:17:50.801419 containerd[1543]: time="2025-11-08T00:17:50.801332198Z" level=error msg="StopPodSandbox for 
\"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" failed" error="failed to destroy network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.801609 kubelet[2729]: E1108 00:17:50.801523 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:17:50.801609 kubelet[2729]: E1108 00:17:50.801566 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6"} Nov 8 00:17:50.801609 kubelet[2729]: E1108 00:17:50.801587 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5b24471-658f-4121-a78d-73d2e59f83f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.801609 kubelet[2729]: E1108 00:17:50.801605 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5b24471-658f-4121-a78d-73d2e59f83f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:17:50.808935 containerd[1543]: time="2025-11-08T00:17:50.808853113Z" level=error msg="StopPodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" failed" error="failed to destroy network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.809221 kubelet[2729]: E1108 00:17:50.809017 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:17:50.809221 kubelet[2729]: E1108 00:17:50.809050 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170"} Nov 8 00:17:50.809221 kubelet[2729]: E1108 00:17:50.809069 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db03c30a-8018-4cf5-ac73-034017743c72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.809221 kubelet[2729]: E1108 00:17:50.809089 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db03c30a-8018-4cf5-ac73-034017743c72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fnnck" podUID="db03c30a-8018-4cf5-ac73-034017743c72" Nov 8 00:17:50.811585 containerd[1543]: time="2025-11-08T00:17:50.811562672Z" level=error msg="StopPodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" failed" error="failed to destroy network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.811787 kubelet[2729]: E1108 00:17:50.811768 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:17:50.811821 kubelet[2729]: E1108 00:17:50.811808 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644"} Nov 8 00:17:50.811880 kubelet[2729]: E1108 00:17:50.811826 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0e38e94-a84e-4a13-adfe-e757102f7549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.811880 kubelet[2729]: E1108 00:17:50.811843 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0e38e94-a84e-4a13-adfe-e757102f7549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2wlbm" podUID="e0e38e94-a84e-4a13-adfe-e757102f7549" Nov 8 00:17:50.813474 containerd[1543]: time="2025-11-08T00:17:50.813454772Z" level=error msg="StopPodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" failed" error="failed to destroy network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.813593 containerd[1543]: time="2025-11-08T00:17:50.813580794Z" level=error msg="StopPodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" failed" error="failed to destroy network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.813732 kubelet[2729]: E1108 00:17:50.813704 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:17:50.813732 kubelet[2729]: E1108 00:17:50.813727 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d"} Nov 8 00:17:50.813785 kubelet[2729]: E1108 00:17:50.813752 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"793d1558-9161-49f1-a744-65e7ee945bd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.813785 kubelet[2729]: E1108 00:17:50.813771 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"793d1558-9161-49f1-a744-65e7ee945bd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:17:50.813851 kubelet[2729]: E1108 00:17:50.813791 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:17:50.813851 kubelet[2729]: E1108 00:17:50.813802 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e"} Nov 8 00:17:50.813851 kubelet[2729]: E1108 00:17:50.813812 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d70d6c69-7a19-4335-9162-f8ca1575049e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.813851 kubelet[2729]: E1108 00:17:50.813821 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d70d6c69-7a19-4335-9162-f8ca1575049e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:17:50.814513 containerd[1543]: time="2025-11-08T00:17:50.814473720Z" level=error msg="StopPodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" failed" error="failed to destroy network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.814573 kubelet[2729]: E1108 00:17:50.814557 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:17:50.814604 kubelet[2729]: E1108 00:17:50.814575 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7"} Nov 8 00:17:50.814604 kubelet[2729]: E1108 00:17:50.814590 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.814673 kubelet[2729]: E1108 00:17:50.814602 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:17:50.817414 containerd[1543]: time="2025-11-08T00:17:50.817328942Z" level=error msg="StopPodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" failed" error="failed to destroy network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:50.817499 kubelet[2729]: E1108 00:17:50.817461 2729 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:17:50.817535 kubelet[2729]: E1108 00:17:50.817501 2729 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a"} Nov 8 00:17:50.817535 kubelet[2729]: E1108 00:17:50.817520 2729 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"356509c0-5649-4d69-a66f-1b0a4ca00464\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:50.817657 kubelet[2729]: E1108 00:17:50.817534 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"356509c0-5649-4d69-a66f-1b0a4ca00464\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:17:54.787111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197116412.mount: Deactivated successfully. 
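
The delete path fails for the same reason as the create path: CNI DEL is served by the same plugin binary, so every StopPodSandbox above also dies on the missing nodename file, and kubelet parks each pod with "Error syncing pod, skipping" until its next sync attempt. When triaging a dump like this, counting failed syncs per pod separates one flapping pod from a node-wide outage; a hypothetical helper (the podUID="..." format matches the kubelet lines above):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	// Journal lines in this dump are long; raise the scanner limit.
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)

    	re := regexp.MustCompile(`podUID="([0-9a-f-]+)"`)
    	counts := map[string]int{}
    	for sc.Scan() {
    		// Several journal entries can share one physical line here.
    		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
    			counts[m[1]]++
    		}
    	}
    	for uid, n := range counts {
    		fmt.Printf("%s\t%d failed syncs\n", uid, n)
    	}
    }
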
Nov 8 00:17:54.803889 containerd[1543]: time="2025-11-08T00:17:54.803825589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:54.808856 containerd[1543]: time="2025-11-08T00:17:54.808222623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:17:54.823251 containerd[1543]: time="2025-11-08T00:17:54.823226120Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:54.823794 containerd[1543]: time="2025-11-08T00:17:54.823773104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.204202076s" Nov 8 00:17:54.823830 containerd[1543]: time="2025-11-08T00:17:54.823795799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:17:54.824107 containerd[1543]: time="2025-11-08T00:17:54.824093582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:54.866986 containerd[1543]: time="2025-11-08T00:17:54.866961234Z" level=info msg="CreateContainer within sandbox \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:17:55.004013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225214361.mount: Deactivated successfully. Nov 8 00:17:55.049145 containerd[1543]: time="2025-11-08T00:17:55.049049006Z" level=info msg="CreateContainer within sandbox \"1ef97721f0bac3c21c75e20b3e20be131a54e6a26af25d3e165f4b3f46b43e68\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b59bcd0c557d2bcdd45ae63b8f3415d4d3abbb27e19caf3e7e3176018c58de0\"" Nov 8 00:17:55.074834 containerd[1543]: time="2025-11-08T00:17:55.074803903Z" level=info msg="StartContainer for \"5b59bcd0c557d2bcdd45ae63b8f3415d4d3abbb27e19caf3e7e3176018c58de0\"" Nov 8 00:17:55.155528 systemd[1]: Started cri-containerd-5b59bcd0c557d2bcdd45ae63b8f3415d4d3abbb27e19caf3e7e3176018c58de0.scope - libcontainer container 5b59bcd0c557d2bcdd45ae63b8f3415d4d3abbb27e19caf3e7e3176018c58de0. 
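
Recovery starts here: containerd pulls ghcr.io/flatcar/calico/node:v3.30.4 (~157 MB in ~5.2 s per the PullImage event), creates the calico-node container inside its existing sandbox, and systemd begins tracking it as a cri-containerd scope. In the real flow kubelet drives all of this over the CRI API; the standalone containerd-client sketch below merely mirrors the pull step (the socket path and the "k8s.io" namespace are the usual CRI defaults, assumed here):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name(), "digest:", img.Target().Digest)
    }
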
Nov 8 00:17:55.181069 containerd[1543]: time="2025-11-08T00:17:55.181016926Z" level=info msg="StartContainer for \"5b59bcd0c557d2bcdd45ae63b8f3415d4d3abbb27e19caf3e7e3176018c58de0\" returns successfully" Nov 8 00:17:55.718660 kubelet[2729]: I1108 00:17:55.704515 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vdv2v" podStartSLOduration=2.198021491 podStartE2EDuration="17.692278949s" podCreationTimestamp="2025-11-08 00:17:38 +0000 UTC" firstStartedPulling="2025-11-08 00:17:39.330086674 +0000 UTC m=+21.211997443" lastFinishedPulling="2025-11-08 00:17:54.824344133 +0000 UTC m=+36.706254901" observedRunningTime="2025-11-08 00:17:55.692147691 +0000 UTC m=+37.574058463" watchObservedRunningTime="2025-11-08 00:17:55.692278949 +0000 UTC m=+37.574189715" Nov 8 00:17:55.931392 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:17:56.001641 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Nov 8 00:17:56.441884 containerd[1543]: time="2025-11-08T00:17:56.441822460Z" level=info msg="StopPodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\"" Nov 8 00:17:56.700462 systemd[1]: run-containerd-runc-k8s.io-5b59bcd0c557d2bcdd45ae63b8f3415d4d3abbb27e19caf3e7e3176018c58de0-runc.SB689S.mount: Deactivated successfully. Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.539 [INFO][3999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.541 [INFO][3999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" iface="eth0" netns="/var/run/netns/cni-e9a7ad66-139e-0732-1a5f-0567a9e72687" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.543 [INFO][3999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" iface="eth0" netns="/var/run/netns/cni-e9a7ad66-139e-0732-1a5f-0567a9e72687" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.544 [INFO][3999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" iface="eth0" netns="/var/run/netns/cni-e9a7ad66-139e-0732-1a5f-0567a9e72687" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.544 [INFO][3999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.544 [INFO][3999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.885 [INFO][4006] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.901 [INFO][4006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.905 [INFO][4006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.937 [WARNING][4006] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.937 [INFO][4006] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.941 [INFO][4006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:56.943709 containerd[1543]: 2025-11-08 00:17:56.942 [INFO][3999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:17:56.945785 systemd[1]: run-netns-cni\x2de9a7ad66\x2d139e\x2d0732\x2d1a5f\x2d0567a9e72687.mount: Deactivated successfully. Nov 8 00:17:56.950196 containerd[1543]: time="2025-11-08T00:17:56.950165629Z" level=info msg="TearDown network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" successfully" Nov 8 00:17:56.950694 containerd[1543]: time="2025-11-08T00:17:56.950197271Z" level=info msg="StopPodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" returns successfully" Nov 8 00:17:57.040254 kubelet[2729]: I1108 00:17:57.040152 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55tlm\" (UniqueName: \"kubernetes.io/projected/e3e70873-2d37-40cc-9ba8-c206d83d372d-kube-api-access-55tlm\") pod \"e3e70873-2d37-40cc-9ba8-c206d83d372d\" (UID: \"e3e70873-2d37-40cc-9ba8-c206d83d372d\") " Nov 8 00:17:57.044089 kubelet[2729]: I1108 00:17:57.043906 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-ca-bundle\") pod \"e3e70873-2d37-40cc-9ba8-c206d83d372d\" (UID: \"e3e70873-2d37-40cc-9ba8-c206d83d372d\") " Nov 8 00:17:57.044089 kubelet[2729]: I1108 00:17:57.043932 2729 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-backend-key-pair\") pod \"e3e70873-2d37-40cc-9ba8-c206d83d372d\" (UID: \"e3e70873-2d37-40cc-9ba8-c206d83d372d\") " Nov 8 00:17:57.070954 kubelet[2729]: I1108 00:17:57.070920 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e3e70873-2d37-40cc-9ba8-c206d83d372d" (UID: "e3e70873-2d37-40cc-9ba8-c206d83d372d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:17:57.071984 systemd[1]: var-lib-kubelet-pods-e3e70873\x2d2d37\x2d40cc\x2d9ba8\x2dc206d83d372d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
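
With calico/node up, the same CNI DEL that failed at 00:17:50 now completes: the plugin enters the sandbox netns (the veth is already gone), takes the host-wide IPAM lock, tolerates the already-released address, and reports teardown success, after which kubelet begins unmounting the dead whisker pod's volumes. Inspecting such a CNI netns by hand can be done with the containernetworking ns helper; a sketch under the assumption that the netns path from the log still exists (Linux, root required):

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"os"

    	"github.com/containernetworking/plugins/pkg/ns"
    )

    func main() {
    	// e.g. /var/run/netns/cni-e9a7ad66-139e-0732-1a5f-0567a9e72687 (from the
    	// log above); only valid while the sandbox netns has not been removed.
    	if len(os.Args) < 2 {
    		log.Fatal("usage: nsinspect <netns-path>")
    	}
    	err := ns.WithNetNSPath(os.Args[1], func(_ ns.NetNS) error {
    		ifaces, err := net.Interfaces()
    		if err != nil {
    			return err
    		}
    		for _, ifc := range ifaces {
    			fmt.Println("in-netns interface:", ifc.Name)
    		}
    		return nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
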
Nov 8 00:17:57.072764 kubelet[2729]: I1108 00:17:57.068535 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e3e70873-2d37-40cc-9ba8-c206d83d372d" (UID: "e3e70873-2d37-40cc-9ba8-c206d83d372d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:17:57.075323 systemd[1]: var-lib-kubelet-pods-e3e70873\x2d2d37\x2d40cc\x2d9ba8\x2dc206d83d372d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55tlm.mount: Deactivated successfully. Nov 8 00:17:57.075631 kubelet[2729]: I1108 00:17:57.075602 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e70873-2d37-40cc-9ba8-c206d83d372d-kube-api-access-55tlm" (OuterVolumeSpecName: "kube-api-access-55tlm") pod "e3e70873-2d37-40cc-9ba8-c206d83d372d" (UID: "e3e70873-2d37-40cc-9ba8-c206d83d372d"). InnerVolumeSpecName "kube-api-access-55tlm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:17:57.144685 kubelet[2729]: I1108 00:17:57.144644 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:17:57.144685 kubelet[2729]: I1108 00:17:57.144676 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3e70873-2d37-40cc-9ba8-c206d83d372d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:17:57.144685 kubelet[2729]: I1108 00:17:57.144684 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55tlm\" (UniqueName: \"kubernetes.io/projected/e3e70873-2d37-40cc-9ba8-c206d83d372d-kube-api-access-55tlm\") on node \"localhost\" DevicePath \"\"" Nov 8 00:17:57.682571 systemd[1]: Removed slice kubepods-besteffort-pode3e70873_2d37_40cc_9ba8_c206d83d372d.slice - libcontainer container kubepods-besteffort-pode3e70873_2d37_40cc_9ba8_c206d83d372d.slice. Nov 8 00:17:57.784788 systemd[1]: Created slice kubepods-besteffort-pod05f2c2bc_5f37_4718_b09a_298a728d0047.slice - libcontainer container kubepods-besteffort-pod05f2c2bc_5f37_4718_b09a_298a728d0047.slice. 
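
The old whisker pod's cgroup slice is removed and a slice for its replacement, whisker-54745c6b5b-z6gpb, is created. The slice names are mechanical: with the systemd cgroup driver, kubelet takes the pod UID, swaps "-" for "_", and prefixes the QoS class. A small helper reproducing that mapping (hypothetical code, but checkable against the slice names above):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // besteffortSlice maps a BestEffort pod UID to its systemd slice name,
    // as seen in the "Created slice ..." entries above.
    func besteffortSlice(podUID string) string {
    	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(besteffortSlice("05f2c2bc-5f37-4718-b09a-298a728d0047"))
    	// Output: kubepods-besteffort-pod05f2c2bc_5f37_4718_b09a_298a728d0047.slice
    }
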
Nov 8 00:17:57.849098 kubelet[2729]: I1108 00:17:57.849045 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05f2c2bc-5f37-4718-b09a-298a728d0047-whisker-backend-key-pair\") pod \"whisker-54745c6b5b-z6gpb\" (UID: \"05f2c2bc-5f37-4718-b09a-298a728d0047\") " pod="calico-system/whisker-54745c6b5b-z6gpb" Nov 8 00:17:57.849098 kubelet[2729]: I1108 00:17:57.849091 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmdtc\" (UniqueName: \"kubernetes.io/projected/05f2c2bc-5f37-4718-b09a-298a728d0047-kube-api-access-fmdtc\") pod \"whisker-54745c6b5b-z6gpb\" (UID: \"05f2c2bc-5f37-4718-b09a-298a728d0047\") " pod="calico-system/whisker-54745c6b5b-z6gpb" Nov 8 00:17:57.849098 kubelet[2729]: I1108 00:17:57.849107 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f2c2bc-5f37-4718-b09a-298a728d0047-whisker-ca-bundle\") pod \"whisker-54745c6b5b-z6gpb\" (UID: \"05f2c2bc-5f37-4718-b09a-298a728d0047\") " pod="calico-system/whisker-54745c6b5b-z6gpb" Nov 8 00:17:58.092449 containerd[1543]: time="2025-11-08T00:17:58.092328259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54745c6b5b-z6gpb,Uid:05f2c2bc-5f37-4718-b09a-298a728d0047,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:58.193379 systemd-networkd[1249]: califac2af73eb3: Link UP Nov 8 00:17:58.193951 systemd-networkd[1249]: califac2af73eb3: Gained carrier Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.119 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.127 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54745c6b5b--z6gpb-eth0 whisker-54745c6b5b- calico-system 05f2c2bc-5f37-4718-b09a-298a728d0047 932 0 2025-11-08 00:17:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54745c6b5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54745c6b5b-z6gpb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califac2af73eb3 [] [] }} ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.127 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.147 [INFO][4144] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" HandleID="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Workload="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.147 [INFO][4144] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" 
HandleID="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Workload="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54745c6b5b-z6gpb", "timestamp":"2025-11-08 00:17:58.14745803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.147 [INFO][4144] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.147 [INFO][4144] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.147 [INFO][4144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.155 [INFO][4144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.163 [INFO][4144] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.165 [INFO][4144] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.166 [INFO][4144] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.166 [INFO][4144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.167 [INFO][4144] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.167 [INFO][4144] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4 Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.169 [INFO][4144] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.172 [INFO][4144] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.172 [INFO][4144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" host="localhost" Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.172 [INFO][4144] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:17:58.207389 containerd[1543]: 2025-11-08 00:17:58.172 [INFO][4144] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" HandleID="k8s-pod-network.2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Workload="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.207987 containerd[1543]: 2025-11-08 00:17:58.174 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54745c6b5b--z6gpb-eth0", GenerateName:"whisker-54745c6b5b-", Namespace:"calico-system", SelfLink:"", UID:"05f2c2bc-5f37-4718-b09a-298a728d0047", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54745c6b5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54745c6b5b-z6gpb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califac2af73eb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:58.207987 containerd[1543]: 2025-11-08 00:17:58.175 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.207987 containerd[1543]: 2025-11-08 00:17:58.175 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califac2af73eb3 ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.207987 containerd[1543]: 2025-11-08 00:17:58.190 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.207987 containerd[1543]: 2025-11-08 00:17:58.191 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54745c6b5b--z6gpb-eth0", GenerateName:"whisker-54745c6b5b-", Namespace:"calico-system", SelfLink:"", UID:"05f2c2bc-5f37-4718-b09a-298a728d0047", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54745c6b5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4", Pod:"whisker-54745c6b5b-z6gpb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califac2af73eb3", MAC:"d6:9a:6d:bd:95:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:58.207987 containerd[1543]: 2025-11-08 00:17:58.204 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4" Namespace="calico-system" Pod="whisker-54745c6b5b-z6gpb" WorkloadEndpoint="localhost-k8s-whisker--54745c6b5b--z6gpb-eth0" Nov 8 00:17:58.227753 containerd[1543]: time="2025-11-08T00:17:58.226521120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:58.227753 containerd[1543]: time="2025-11-08T00:17:58.226583789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:58.227753 containerd[1543]: time="2025-11-08T00:17:58.226594129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:58.227753 containerd[1543]: time="2025-11-08T00:17:58.226659318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:58.248556 systemd[1]: Started cri-containerd-2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4.scope - libcontainer container 2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4. 
Nov 8 00:17:58.257526 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:58.293652 containerd[1543]: time="2025-11-08T00:17:58.293472262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54745c6b5b-z6gpb,Uid:05f2c2bc-5f37-4718-b09a-298a728d0047,Namespace:calico-system,Attempt:0,} returns sandbox id \"2551542717f61fa7e36091865160f983458ed4472fb6c261230d6928331f70c4\"" Nov 8 00:17:58.321263 kubelet[2729]: I1108 00:17:58.321235 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e70873-2d37-40cc-9ba8-c206d83d372d" path="/var/lib/kubelet/pods/e3e70873-2d37-40cc-9ba8-c206d83d372d/volumes" Nov 8 00:17:58.322709 containerd[1543]: time="2025-11-08T00:17:58.322677609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:17:58.684272 containerd[1543]: time="2025-11-08T00:17:58.684168341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:58.693522 containerd[1543]: time="2025-11-08T00:17:58.690048332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:17:58.693522 containerd[1543]: time="2025-11-08T00:17:58.690067501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:17:58.709467 kubelet[2729]: E1108 00:17:58.709433 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:17:58.714978 kubelet[2729]: E1108 00:17:58.709475 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:17:58.723149 kubelet[2729]: E1108 00:17:58.723123 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:58.724011 containerd[1543]: time="2025-11-08T00:17:58.723917128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:17:59.071386 containerd[1543]: time="2025-11-08T00:17:59.071281880Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:59.072266 containerd[1543]: time="2025-11-08T00:17:59.071944320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:17:59.072266 containerd[1543]: time="2025-11-08T00:17:59.072000858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:17:59.072378 kubelet[2729]: E1108 00:17:59.072104 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:17:59.072378 kubelet[2729]: E1108 00:17:59.072138 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:17:59.072378 kubelet[2729]: E1108 00:17:59.072196 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:59.072473 kubelet[2729]: E1108 00:17:59.072229 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:17:59.613056 systemd-networkd[1249]: califac2af73eb3: Gained IPv6LL Nov 8 00:17:59.686150 kubelet[2729]: E1108 00:17:59.685716 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:18:03.273947 containerd[1543]: time="2025-11-08T00:18:03.273668708Z" level=info msg="StopPodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\"" Nov 8 00:18:03.274491 containerd[1543]: time="2025-11-08T00:18:03.274364841Z" level=info msg="StopPodSandbox for \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\"" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.310 [INFO][4325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.311 [INFO][4325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" iface="eth0" netns="/var/run/netns/cni-50cd08b7-50f6-31bd-3a5e-fb34fe4783dc" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.312 [INFO][4325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" iface="eth0" netns="/var/run/netns/cni-50cd08b7-50f6-31bd-3a5e-fb34fe4783dc" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.313 [INFO][4325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" iface="eth0" netns="/var/run/netns/cni-50cd08b7-50f6-31bd-3a5e-fb34fe4783dc" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.313 [INFO][4325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.313 [INFO][4325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.335 [INFO][4336] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.335 [INFO][4336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.335 [INFO][4336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.339 [WARNING][4336] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.339 [INFO][4336] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.340 [INFO][4336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:03.344323 containerd[1543]: 2025-11-08 00:18:03.342 [INFO][4325] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:03.346509 containerd[1543]: time="2025-11-08T00:18:03.346044395Z" level=info msg="TearDown network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" successfully" Nov 8 00:18:03.346509 containerd[1543]: time="2025-11-08T00:18:03.346091376Z" level=info msg="StopPodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" returns successfully" Nov 8 00:18:03.346731 systemd[1]: run-netns-cni\x2d50cd08b7\x2d50f6\x2d31bd\x2d3a5e\x2dfb34fe4783dc.mount: Deactivated successfully. Nov 8 00:18:03.348326 containerd[1543]: time="2025-11-08T00:18:03.348244576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2wlbm,Uid:e0e38e94-a84e-4a13-adfe-e757102f7549,Namespace:kube-system,Attempt:1,}" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.312 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.312 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" iface="eth0" netns="/var/run/netns/cni-e2be9191-dbd9-a869-e6cc-2593a0331def" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.312 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" iface="eth0" netns="/var/run/netns/cni-e2be9191-dbd9-a869-e6cc-2593a0331def" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.313 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" iface="eth0" netns="/var/run/netns/cni-e2be9191-dbd9-a869-e6cc-2593a0331def" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.313 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.313 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.335 [INFO][4335] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.335 [INFO][4335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.340 [INFO][4335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.344 [WARNING][4335] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.346 [INFO][4335] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.348 [INFO][4335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:03.351848 containerd[1543]: 2025-11-08 00:18:03.350 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:03.353201 containerd[1543]: time="2025-11-08T00:18:03.352422919Z" level=info msg="TearDown network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" successfully" Nov 8 00:18:03.353201 containerd[1543]: time="2025-11-08T00:18:03.352437580Z" level=info msg="StopPodSandbox for \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" returns successfully" Nov 8 00:18:03.355158 systemd[1]: run-netns-cni\x2de2be9191\x2ddbd9\x2da869\x2de6cc\x2d2593a0331def.mount: Deactivated successfully. 
Nov 8 00:18:03.357128 containerd[1543]: time="2025-11-08T00:18:03.356181811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-dlrvn,Uid:e5b24471-658f-4121-a78d-73d2e59f83f1,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:18:03.440137 systemd-networkd[1249]: cali276e7617470: Link UP Nov 8 00:18:03.440884 systemd-networkd[1249]: cali276e7617470: Gained carrier Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.382 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.390 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--2wlbm-eth0 coredns-66bc5c9577- kube-system e0e38e94-a84e-4a13-adfe-e757102f7549 962 0 2025-11-08 00:17:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-2wlbm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali276e7617470 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.390 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.413 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" HandleID="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.413 [INFO][4373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" HandleID="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-2wlbm", "timestamp":"2025-11-08 00:18:03.413681186 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.413 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.413 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.413 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.419 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.421 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.428 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.429 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.430 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.430 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.431 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245 Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.432 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.435 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.435 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" host="localhost" Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.435 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
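Note the interleaving here: the coredns request ([4373]) acquires the host-wide IPAM lock at 03.413, while the concurrent calico-apiserver request ([4378], further down) logs "About to acquire" at 03.427 and only gets the lock at 03.435, the moment [4373] releases it. Concurrent CNI ADDs on one node are serialized so that two pods cannot read the same block state and claim the same ordinal. The pattern is a plain mutex around the read-modify-write of the block; a sketch with invented types:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // allocator serializes all claims against a block behind one
    // host-wide lock, so concurrent CNI ADDs (coredns and calico-apiserver
    // here) cannot both claim the same address.
    type allocator struct {
    	mu   sync.Mutex // plays the role of the "host-wide IPAM lock"
    	next int        // next free ordinal in 192.168.88.128/26
    }

    func (a *allocator) assign(pod string) string {
    	a.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer a.mu.Unlock() // "Released host-wide IPAM lock." on return
    	a.next++
    	return fmt.Sprintf("%s -> 192.168.88.%d/26", pod, 128+a.next)
    }

    func main() {
    	a := &allocator{next: 1} // .129 is already held by the whisker pod
    	var wg sync.WaitGroup
    	for _, pod := range []string{"coredns-66bc5c9577-2wlbm", "calico-apiserver-5d46c4cb7-dlrvn"} {
    		wg.Add(1)
    		go func(p string) { defer wg.Done(); fmt.Println(a.assign(p)) }(pod)
    	}
    	wg.Wait() // two distinct addresses (.130 and .131), in lock-acquisition order
    }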
Nov 8 00:18:03.454550 containerd[1543]: 2025-11-08 00:18:03.435 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" HandleID="k8s-pod-network.141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.455014 containerd[1543]: 2025-11-08 00:18:03.437 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2wlbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e0e38e94-a84e-4a13-adfe-e757102f7549", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-2wlbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali276e7617470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:03.455014 containerd[1543]: 2025-11-08 00:18:03.437 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.455014 containerd[1543]: 2025-11-08 00:18:03.437 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali276e7617470 ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.455014 containerd[1543]: 2025-11-08 00:18:03.441 
[INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.455014 containerd[1543]: 2025-11-08 00:18:03.441 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2wlbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e0e38e94-a84e-4a13-adfe-e757102f7549", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245", Pod:"coredns-66bc5c9577-2wlbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali276e7617470", MAC:"36:49:df:68:dc:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:03.455014 containerd[1543]: 2025-11-08 00:18:03.450 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245" Namespace="kube-system" Pod="coredns-66bc5c9577-2wlbm" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:03.467161 containerd[1543]: time="2025-11-08T00:18:03.467094014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:03.467161 containerd[1543]: time="2025-11-08T00:18:03.467143459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:03.467161 containerd[1543]: time="2025-11-08T00:18:03.467158318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:03.467337 containerd[1543]: time="2025-11-08T00:18:03.467224434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:03.483492 systemd[1]: Started cri-containerd-141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245.scope - libcontainer container 141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245. Nov 8 00:18:03.492233 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:03.514082 containerd[1543]: time="2025-11-08T00:18:03.514056813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2wlbm,Uid:e0e38e94-a84e-4a13-adfe-e757102f7549,Namespace:kube-system,Attempt:1,} returns sandbox id \"141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245\"" Nov 8 00:18:03.517128 containerd[1543]: time="2025-11-08T00:18:03.517052138Z" level=info msg="CreateContainer within sandbox \"141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:18:03.528851 containerd[1543]: time="2025-11-08T00:18:03.528040269Z" level=info msg="CreateContainer within sandbox \"141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"327effaf9d5a6e5e99ca9310bda7a2b4bba9337ff350e449b1b5f5e50a79bad1\"" Nov 8 00:18:03.530207 containerd[1543]: time="2025-11-08T00:18:03.529365918Z" level=info msg="StartContainer for \"327effaf9d5a6e5e99ca9310bda7a2b4bba9337ff350e449b1b5f5e50a79bad1\"" Nov 8 00:18:03.555461 systemd[1]: Started cri-containerd-327effaf9d5a6e5e99ca9310bda7a2b4bba9337ff350e449b1b5f5e50a79bad1.scope - libcontainer container 327effaf9d5a6e5e99ca9310bda7a2b4bba9337ff350e449b1b5f5e50a79bad1. 
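The coredns lines above show the CRI call sequence end to end: RunPodSandbox returns a sandbox ID (141fd5...), CreateContainer is issued within that sandbox and returns a container ID (327eff...), and StartContainer runs it, with containerd starting a matching cri-containerd-<id>.scope unit at each stage. The same sequence can be driven by hand against containerd's CRI socket; below is a sketch using the CRI v1 gRPC API, with error handling elided and the socket path and image reference being assumptions for the example, not values taken from this system:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx := context.Background()
    	// Socket path is an assumption; the host's containerd config may differ.
    	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	rt := cri.NewRuntimeServiceClient(conn)

    	sandboxCfg := &cri.PodSandboxConfig{
    		Metadata: &cri.PodSandboxMetadata{
    			Name: "coredns-66bc5c9577-2wlbm", Namespace: "kube-system",
    			Uid: "e0e38e94-a84e-4a13-adfe-e757102f7549", Attempt: 1,
    		},
    	}
    	// 1. RunPodSandbox: the network namespace and CNI ADD happen here.
    	sb, _ := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})

    	// 2. CreateContainer within the sandbox (image is a placeholder).
    	ctr, _ := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &cri.ContainerConfig{
    			Metadata: &cri.ContainerMetadata{Name: "coredns"},
    			Image:    &cri.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
    		},
    		SandboxConfig: sandboxCfg,
    	})

    	// 3. StartContainer: containerd launches the runc shim, and the
    	//    cri-containerd-<id>.scope unit appears in systemd.
    	_, _ = rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: ctr.ContainerId})
    	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
    }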
Nov 8 00:18:03.562528 systemd-networkd[1249]: cali538aaa1971b: Link UP Nov 8 00:18:03.563611 systemd-networkd[1249]: cali538aaa1971b: Gained carrier Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.387 [INFO][4358] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.396 [INFO][4358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0 calico-apiserver-5d46c4cb7- calico-apiserver e5b24471-658f-4121-a78d-73d2e59f83f1 963 0 2025-11-08 00:17:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d46c4cb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d46c4cb7-dlrvn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali538aaa1971b [] [] }} ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.396 [INFO][4358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.427 [INFO][4378] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" HandleID="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.427 [INFO][4378] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" HandleID="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d46c4cb7-dlrvn", "timestamp":"2025-11-08 00:18:03.427096726 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.427 [INFO][4378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.435 [INFO][4378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.435 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.520 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.524 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.529 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.534 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.537 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.537 [INFO][4378] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.540 [INFO][4378] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.544 [INFO][4378] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.550 [INFO][4378] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.552 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" host="localhost" Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.552 [INFO][4378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:03.575705 containerd[1543]: 2025-11-08 00:18:03.552 [INFO][4378] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" HandleID="k8s-pod-network.6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.576238 containerd[1543]: 2025-11-08 00:18:03.554 [INFO][4358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b24471-658f-4121-a78d-73d2e59f83f1", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d46c4cb7-dlrvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali538aaa1971b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:03.576238 containerd[1543]: 2025-11-08 00:18:03.558 [INFO][4358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.576238 containerd[1543]: 2025-11-08 00:18:03.558 [INFO][4358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali538aaa1971b ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.576238 containerd[1543]: 2025-11-08 00:18:03.564 [INFO][4358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.576238 containerd[1543]: 2025-11-08 00:18:03.564 [INFO][4358] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b24471-658f-4121-a78d-73d2e59f83f1", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea", Pod:"calico-apiserver-5d46c4cb7-dlrvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali538aaa1971b", MAC:"66:50:05:a7:fa:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:03.576238 containerd[1543]: 2025-11-08 00:18:03.573 [INFO][4358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-dlrvn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:03.590559 containerd[1543]: time="2025-11-08T00:18:03.590383357Z" level=info msg="StartContainer for \"327effaf9d5a6e5e99ca9310bda7a2b4bba9337ff350e449b1b5f5e50a79bad1\" returns successfully" Nov 8 00:18:03.595361 containerd[1543]: time="2025-11-08T00:18:03.595274304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:03.595361 containerd[1543]: time="2025-11-08T00:18:03.595322560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:03.595361 containerd[1543]: time="2025-11-08T00:18:03.595336423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:03.595512 containerd[1543]: time="2025-11-08T00:18:03.595388377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:03.608415 systemd[1]: Started cri-containerd-6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea.scope - libcontainer container 6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea. 
Nov 8 00:18:03.616629 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:03.639283 containerd[1543]: time="2025-11-08T00:18:03.639120303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-dlrvn,Uid:e5b24471-658f-4121-a78d-73d2e59f83f1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea\"" Nov 8 00:18:03.641266 containerd[1543]: time="2025-11-08T00:18:03.641144164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:03.706579 kubelet[2729]: I1108 00:18:03.706536 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2wlbm" podStartSLOduration=39.706518908 podStartE2EDuration="39.706518908s" podCreationTimestamp="2025-11-08 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:03.706430128 +0000 UTC m=+45.588340896" watchObservedRunningTime="2025-11-08 00:18:03.706518908 +0000 UTC m=+45.588429673" Nov 8 00:18:03.986726 containerd[1543]: time="2025-11-08T00:18:03.986695137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:03.995691 containerd[1543]: time="2025-11-08T00:18:03.990367269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:03.995691 containerd[1543]: time="2025-11-08T00:18:03.990420804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:03.995759 kubelet[2729]: E1108 00:18:03.990505 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:03.995759 kubelet[2729]: E1108 00:18:03.990549 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:03.995759 kubelet[2729]: E1108 00:18:03.990603 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d46c4cb7-dlrvn_calico-apiserver(e5b24471-658f-4121-a78d-73d2e59f83f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:03.995759 kubelet[2729]: E1108 00:18:03.990626 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:18:04.267297 containerd[1543]: time="2025-11-08T00:18:04.267165665Z" level=info msg="StopPodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\"" Nov 8 00:18:04.267401 containerd[1543]: time="2025-11-08T00:18:04.267344514Z" level=info msg="StopPodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\"" Nov 8 00:18:04.268040 containerd[1543]: time="2025-11-08T00:18:04.267575431Z" level=info msg="StopPodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\"" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.326 [INFO][4571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.327 [INFO][4571] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" iface="eth0" netns="/var/run/netns/cni-76564a99-8375-8fca-c119-e5a2b4ef7e00" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.327 [INFO][4571] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" iface="eth0" netns="/var/run/netns/cni-76564a99-8375-8fca-c119-e5a2b4ef7e00" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.327 [INFO][4571] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" iface="eth0" netns="/var/run/netns/cni-76564a99-8375-8fca-c119-e5a2b4ef7e00" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.327 [INFO][4571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.327 [INFO][4571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.362 [INFO][4590] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.362 [INFO][4590] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.362 [INFO][4590] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.370 [WARNING][4590] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.370 [INFO][4590] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.371 [INFO][4590] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:04.375722 containerd[1543]: 2025-11-08 00:18:04.373 [INFO][4571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:04.379151 containerd[1543]: time="2025-11-08T00:18:04.378234208Z" level=info msg="TearDown network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" successfully" Nov 8 00:18:04.379151 containerd[1543]: time="2025-11-08T00:18:04.378257863Z" level=info msg="StopPodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" returns successfully" Nov 8 00:18:04.378684 systemd[1]: run-netns-cni\x2d76564a99\x2d8375\x2d8fca\x2dc119\x2de5a2b4ef7e00.mount: Deactivated successfully. Nov 8 00:18:04.382878 containerd[1543]: time="2025-11-08T00:18:04.382549024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6499957458-l5qw9,Uid:d70d6c69-7a19-4335-9162-f8ca1575049e,Namespace:calico-system,Attempt:1,}" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.335 [INFO][4572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.335 [INFO][4572] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" iface="eth0" netns="/var/run/netns/cni-12a42c71-6d1e-57cc-42f0-dfa8f8ae7464" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.335 [INFO][4572] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" iface="eth0" netns="/var/run/netns/cni-12a42c71-6d1e-57cc-42f0-dfa8f8ae7464" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.336 [INFO][4572] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" iface="eth0" netns="/var/run/netns/cni-12a42c71-6d1e-57cc-42f0-dfa8f8ae7464" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.336 [INFO][4572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.336 [INFO][4572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.368 [INFO][4595] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.368 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.371 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.378 [WARNING][4595] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.379 [INFO][4595] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.381 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:04.384315 containerd[1543]: 2025-11-08 00:18:04.383 [INFO][4572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:04.386421 containerd[1543]: time="2025-11-08T00:18:04.384407691Z" level=info msg="TearDown network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" successfully" Nov 8 00:18:04.386421 containerd[1543]: time="2025-11-08T00:18:04.384419391Z" level=info msg="StopPodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" returns successfully" Nov 8 00:18:04.386711 systemd[1]: run-netns-cni\x2d12a42c71\x2d6d1e\x2d57cc\x2d42f0\x2ddfa8f8ae7464.mount: Deactivated successfully. Nov 8 00:18:04.387266 containerd[1543]: time="2025-11-08T00:18:04.387249891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c7df7fbc6-cwgpp,Uid:793d1558-9161-49f1-a744-65e7ee945bd5,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.338 [INFO][4579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.338 [INFO][4579] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" iface="eth0" netns="/var/run/netns/cni-3e336e52-9b7c-106d-cb01-87364cbcb23e" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.339 [INFO][4579] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" iface="eth0" netns="/var/run/netns/cni-3e336e52-9b7c-106d-cb01-87364cbcb23e" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.339 [INFO][4579] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" iface="eth0" netns="/var/run/netns/cni-3e336e52-9b7c-106d-cb01-87364cbcb23e" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.339 [INFO][4579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.339 [INFO][4579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.388 [INFO][4600] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.388 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.388 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.396 [WARNING][4600] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.396 [INFO][4600] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.398 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:04.405120 containerd[1543]: 2025-11-08 00:18:04.402 [INFO][4579] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:04.406406 containerd[1543]: time="2025-11-08T00:18:04.405378642Z" level=info msg="TearDown network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" successfully" Nov 8 00:18:04.406406 containerd[1543]: time="2025-11-08T00:18:04.405394847Z" level=info msg="StopPodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" returns successfully" Nov 8 00:18:04.407423 containerd[1543]: time="2025-11-08T00:18:04.407406519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fnnck,Uid:db03c30a-8018-4cf5-ac73-034017743c72,Namespace:kube-system,Attempt:1,}" Nov 8 00:18:04.506237 systemd-networkd[1249]: caliceb13f98382: Link UP Nov 8 00:18:04.507261 systemd-networkd[1249]: caliceb13f98382: Gained carrier Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.419 [INFO][4611] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.431 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0 calico-kube-controllers-6499957458- calico-system d70d6c69-7a19-4335-9162-f8ca1575049e 984 0 2025-11-08 00:17:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6499957458 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6499957458-l5qw9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliceb13f98382 [] [] }} ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.431 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.471 [INFO][4643] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" HandleID="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.471 [INFO][4643] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" HandleID="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6499957458-l5qw9", "timestamp":"2025-11-08 00:18:04.471043022 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.471 [INFO][4643] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.471 [INFO][4643] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.471 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.476 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.479 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.483 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.485 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.488 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.488 [INFO][4643] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.491 [INFO][4643] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136 Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.495 [INFO][4643] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.499 [INFO][4643] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.499 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" host="localhost" Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.499 [INFO][4643] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
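The ipam/ipam_plugin lines above trace Calico's block-affinity allocation in order: acquire the host-wide IPAM lock, confirm this host's affinity for 192.168.88.128/26, load the block, claim the next free address (.132 here), write the block back, and release the lock. A deliberately simplified, in-memory sketch of that sequence follows; the real allocator persists blocks in the datastore and retries on write conflicts, none of which is modelled here:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// blockAllocator models only the shape of the logged sequence: a host-wide
// lock around "find affine block, hand out next free IP, write block back".
type blockAllocator struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock"
	cidr *net.IPNet // the host's affine block, e.g. 192.168.88.128/26
	next int        // next unallocated offset within the block
}

func (a *blockAllocator) autoAssign(handle string) (net.IP, error) {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	ip := make(net.IP, len(a.cidr.IP))
	copy(ip, a.cidr.IP)
	ip[len(ip)-1] += byte(a.next) // claim the next offset in the /26
	if !a.cidr.Contains(ip) {
		return nil, fmt.Errorf("block %s exhausted (handle %s)", a.cidr, handle)
	}
	a.next++
	return ip, nil
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	alloc := &blockAllocator{cidr: block, next: 4} // .128-.131 already in use
	ip, _ := alloc.autoAssign("k8s-pod-network.2b530f39")
	fmt.Println("claimed", ip) // 192.168.88.132, matching the log
}
```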
Nov 8 00:18:04.522337 containerd[1543]: 2025-11-08 00:18:04.499 [INFO][4643] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" HandleID="k8s-pod-network.2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.523510 containerd[1543]: 2025-11-08 00:18:04.501 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0", GenerateName:"calico-kube-controllers-6499957458-", Namespace:"calico-system", SelfLink:"", UID:"d70d6c69-7a19-4335-9162-f8ca1575049e", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6499957458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6499957458-l5qw9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliceb13f98382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:04.523510 containerd[1543]: 2025-11-08 00:18:04.502 [INFO][4611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.523510 containerd[1543]: 2025-11-08 00:18:04.502 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceb13f98382 ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.523510 containerd[1543]: 2025-11-08 00:18:04.505 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.523510 containerd[1543]: 2025-11-08 00:18:04.507 [INFO][4611] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0", GenerateName:"calico-kube-controllers-6499957458-", Namespace:"calico-system", SelfLink:"", UID:"d70d6c69-7a19-4335-9162-f8ca1575049e", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6499957458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136", Pod:"calico-kube-controllers-6499957458-l5qw9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliceb13f98382", MAC:"7e:68:9c:91:a7:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:04.523510 containerd[1543]: 2025-11-08 00:18:04.519 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136" Namespace="calico-system" Pod="calico-kube-controllers-6499957458-l5qw9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:04.534426 systemd-networkd[1249]: cali276e7617470: Gained IPv6LL Nov 8 00:18:04.537505 containerd[1543]: time="2025-11-08T00:18:04.537400371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:04.537505 containerd[1543]: time="2025-11-08T00:18:04.537451722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:04.537959 containerd[1543]: time="2025-11-08T00:18:04.537509406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:04.537959 containerd[1543]: time="2025-11-08T00:18:04.537575192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:04.554468 systemd[1]: Started cri-containerd-2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136.scope - libcontainer container 2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136. 
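Host-side interface names like caliceb13f98382 and cali538aaa1971b are deterministic, not random: a "cali" prefix followed by 11 hex characters of a SHA-1 over a workload identifier, which keeps the result within the kernel's 15-character IFNAMSIZ limit. A sketch of that derivation; the exact hash input has varied across Calico releases, so the printed name will not necessarily reproduce the ones above:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload approximates Calico's host-side veth naming: prefix
// plus the first 11 hex chars of a SHA-1 over a workload identifier. The
// identifier format used here is an assumption; Calico has changed it
// between releases.
func vethNameForWorkload(prefix, namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return prefix + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("cali", "calico-system", "calico-kube-controllers-6499957458-l5qw9"))
}
```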
Nov 8 00:18:04.563322 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:04.591748 containerd[1543]: time="2025-11-08T00:18:04.591502211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6499957458-l5qw9,Uid:d70d6c69-7a19-4335-9162-f8ca1575049e,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136\"" Nov 8 00:18:04.593862 containerd[1543]: time="2025-11-08T00:18:04.593811627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:18:04.608298 systemd-networkd[1249]: cali2af051215d1: Link UP Nov 8 00:18:04.609176 systemd-networkd[1249]: cali2af051215d1: Gained carrier Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.430 [INFO][4622] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.438 [INFO][4622] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0 calico-apiserver-7c7df7fbc6- calico-apiserver 793d1558-9161-49f1-a744-65e7ee945bd5 985 0 2025-11-08 00:17:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c7df7fbc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c7df7fbc6-cwgpp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2af051215d1 [] [] }} ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.438 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.474 [INFO][4648] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" HandleID="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.474 [INFO][4648] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" HandleID="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c7df7fbc6-cwgpp", "timestamp":"2025-11-08 00:18:04.474365374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:04.621654 containerd[1543]: 
2025-11-08 00:18:04.474 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.499 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.499 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.578 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.584 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.588 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.590 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.592 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.593 [INFO][4648] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.595 [INFO][4648] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2 Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.598 [INFO][4648] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.602 [INFO][4648] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.602 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" host="localhost" Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.602 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
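Note that the claims come out strictly in sequence (.131, .132, .133, and .134 below) because every concurrent CNI ADD on the node serializes on the same host-wide lock; the "About to acquire" stamp at 00:18:04.474 versus "Acquired" at 00:18:04.499 is that queueing made visible. A small, hypothetical audit helper that scans journal output on stdin for the "Successfully claimed IPs" lines, assuming exactly the message format shown in these entries:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Reads journal output on stdin and reports every IP the Calico IPAM
// plugin claims, using the exact phrasing seen in the log above.
func main() {
	re := regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\] block=([0-9./]+)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // containerd journal lines are long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("claimed %s from block %s\n", m[1], m[2])
		}
	}
}
```

Fed something like `journalctl -t containerd`, this would print the four claims visible in this window.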
Nov 8 00:18:04.621654 containerd[1543]: 2025-11-08 00:18:04.602 [INFO][4648] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" HandleID="k8s-pod-network.09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.622761 containerd[1543]: 2025-11-08 00:18:04.604 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0", GenerateName:"calico-apiserver-7c7df7fbc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"793d1558-9161-49f1-a744-65e7ee945bd5", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c7df7fbc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c7df7fbc6-cwgpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af051215d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:04.622761 containerd[1543]: 2025-11-08 00:18:04.604 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.622761 containerd[1543]: 2025-11-08 00:18:04.604 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2af051215d1 ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.622761 containerd[1543]: 2025-11-08 00:18:04.610 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.622761 containerd[1543]: 2025-11-08 00:18:04.611 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0", GenerateName:"calico-apiserver-7c7df7fbc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"793d1558-9161-49f1-a744-65e7ee945bd5", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c7df7fbc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2", Pod:"calico-apiserver-7c7df7fbc6-cwgpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af051215d1", MAC:"9a:85:e9:44:20:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:04.622761 containerd[1543]: 2025-11-08 00:18:04.620 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c7df7fbc6-cwgpp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:04.632884 containerd[1543]: time="2025-11-08T00:18:04.632826549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:04.632983 containerd[1543]: time="2025-11-08T00:18:04.632892866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:04.632983 containerd[1543]: time="2025-11-08T00:18:04.632929206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:04.633056 containerd[1543]: time="2025-11-08T00:18:04.633017871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:04.647428 systemd[1]: Started cri-containerd-09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2.scope - libcontainer container 09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2. 
Nov 8 00:18:04.656922 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:04.689526 containerd[1543]: time="2025-11-08T00:18:04.689495649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c7df7fbc6-cwgpp,Uid:793d1558-9161-49f1-a744-65e7ee945bd5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2\"" Nov 8 00:18:04.707451 systemd-networkd[1249]: cali1a170b6c890: Link UP Nov 8 00:18:04.708180 systemd-networkd[1249]: cali1a170b6c890: Gained carrier Nov 8 00:18:04.709135 kubelet[2729]: E1108 00:18:04.708652 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.444 [INFO][4631] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.462 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--fnnck-eth0 coredns-66bc5c9577- kube-system db03c30a-8018-4cf5-ac73-034017743c72 986 0 2025-11-08 00:17:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-fnnck eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a170b6c890 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.462 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.494 [INFO][4657] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" HandleID="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.495 [INFO][4657] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" HandleID="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50b0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-66bc5c9577-fnnck", "timestamp":"2025-11-08 00:18:04.494882703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.495 [INFO][4657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.602 [INFO][4657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.602 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.680 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.684 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.689 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.690 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.693 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.693 [INFO][4657] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.694 [INFO][4657] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.697 [INFO][4657] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.701 [INFO][4657] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.701 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" host="localhost" Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.702 [INFO][4657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:04.732881 containerd[1543]: 2025-11-08 00:18:04.702 [INFO][4657] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" HandleID="k8s-pod-network.8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.733743 containerd[1543]: 2025-11-08 00:18:04.704 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fnnck-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c30a-8018-4cf5-ac73-034017743c72", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-fnnck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a170b6c890", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:04.733743 containerd[1543]: 2025-11-08 00:18:04.704 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.733743 containerd[1543]: 2025-11-08 00:18:04.704 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a170b6c890 ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.733743 containerd[1543]: 2025-11-08 00:18:04.709 
[INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.733743 containerd[1543]: 2025-11-08 00:18:04.710 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fnnck-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c30a-8018-4cf5-ac73-034017743c72", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e", Pod:"coredns-66bc5c9577-fnnck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a170b6c890", MAC:"d2:f5:fe:91:16:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:04.733743 containerd[1543]: 2025-11-08 00:18:04.728 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e" Namespace="kube-system" Pod="coredns-66bc5c9577-fnnck" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:04.760618 containerd[1543]: time="2025-11-08T00:18:04.759547144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:04.760618 containerd[1543]: time="2025-11-08T00:18:04.759677210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:04.760618 containerd[1543]: time="2025-11-08T00:18:04.759709275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:04.760618 containerd[1543]: time="2025-11-08T00:18:04.759799366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:04.776099 systemd[1]: Started cri-containerd-8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e.scope - libcontainer container 8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e. Nov 8 00:18:04.789157 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:04.811932 containerd[1543]: time="2025-11-08T00:18:04.811895562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fnnck,Uid:db03c30a-8018-4cf5-ac73-034017743c72,Namespace:kube-system,Attempt:1,} returns sandbox id \"8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e\"" Nov 8 00:18:04.822569 containerd[1543]: time="2025-11-08T00:18:04.822487507Z" level=info msg="CreateContainer within sandbox \"8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:18:04.844341 containerd[1543]: time="2025-11-08T00:18:04.844084081Z" level=info msg="CreateContainer within sandbox \"8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3fd64a35bbc74903102fdefb7e1b0ca05113a0bf6806b35cb7fb3c3211e1588\"" Nov 8 00:18:04.844581 containerd[1543]: time="2025-11-08T00:18:04.844540751Z" level=info msg="StartContainer for \"f3fd64a35bbc74903102fdefb7e1b0ca05113a0bf6806b35cb7fb3c3211e1588\"" Nov 8 00:18:04.864446 systemd[1]: Started cri-containerd-f3fd64a35bbc74903102fdefb7e1b0ca05113a0bf6806b35cb7fb3c3211e1588.scope - libcontainer container f3fd64a35bbc74903102fdefb7e1b0ca05113a0bf6806b35cb7fb3c3211e1588. 
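One readability note on the coredns endpoint dumps above: the WorkloadEndpointPort values are printed in hex, which obscures an otherwise familiar set of CoreDNS ports. Decoded:

```go
package main

import "fmt"

// The hex Port values from the coredns WorkloadEndpoint dump, decoded.
func main() {
	ports := []struct {
		name string
		port int
	}{
		{"dns (UDP)", 0x35},         // 53
		{"dns-tcp", 0x35},           // 53
		{"metrics", 0x23c1},         // 9153
		{"liveness-probe", 0x1f90},  // 8080
		{"readiness-probe", 0x1ff5}, // 8181
	}
	for _, p := range ports {
		fmt.Printf("%-16s %d\n", p.name, p.port)
	}
}
```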
Nov 8 00:18:04.881110 containerd[1543]: time="2025-11-08T00:18:04.881060590Z" level=info msg="StartContainer for \"f3fd64a35bbc74903102fdefb7e1b0ca05113a0bf6806b35cb7fb3c3211e1588\" returns successfully" Nov 8 00:18:04.938699 containerd[1543]: time="2025-11-08T00:18:04.938592594Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:04.938901 containerd[1543]: time="2025-11-08T00:18:04.938883494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:18:04.939320 containerd[1543]: time="2025-11-08T00:18:04.938945957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:04.939360 kubelet[2729]: E1108 00:18:04.939099 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:04.939360 kubelet[2729]: E1108 00:18:04.939131 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:04.939360 kubelet[2729]: E1108 00:18:04.939254 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6499957458-l5qw9_calico-system(d70d6c69-7a19-4335-9162-f8ca1575049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:04.939360 kubelet[2729]: E1108 00:18:04.939278 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:18:04.940005 containerd[1543]: time="2025-11-08T00:18:04.939659243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:05.268166 containerd[1543]: time="2025-11-08T00:18:05.267297210Z" level=info msg="StopPodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\"" Nov 8 00:18:05.288613 containerd[1543]: time="2025-11-08T00:18:05.288365229Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:05.288986 containerd[1543]: 
time="2025-11-08T00:18:05.288960655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:05.289135 containerd[1543]: time="2025-11-08T00:18:05.289066357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:05.289785 kubelet[2729]: E1108 00:18:05.289251 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:05.289785 kubelet[2729]: E1108 00:18:05.289277 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:05.289785 kubelet[2729]: E1108 00:18:05.289374 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c7df7fbc6-cwgpp_calico-apiserver(793d1558-9161-49f1-a744-65e7ee945bd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:05.289785 kubelet[2729]: E1108 00:18:05.289398 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.300 [INFO][4874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.300 [INFO][4874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" iface="eth0" netns="/var/run/netns/cni-2c939f87-8233-a0eb-993b-ffe4d3bcd7f1" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.301 [INFO][4874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" iface="eth0" netns="/var/run/netns/cni-2c939f87-8233-a0eb-993b-ffe4d3bcd7f1" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.301 [INFO][4874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" iface="eth0" netns="/var/run/netns/cni-2c939f87-8233-a0eb-993b-ffe4d3bcd7f1" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.301 [INFO][4874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.301 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.314 [INFO][4881] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.314 [INFO][4881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.314 [INFO][4881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.318 [WARNING][4881] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.318 [INFO][4881] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.319 [INFO][4881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:05.321851 containerd[1543]: 2025-11-08 00:18:05.320 [INFO][4874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:05.322269 containerd[1543]: time="2025-11-08T00:18:05.321988547Z" level=info msg="TearDown network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" successfully" Nov 8 00:18:05.322269 containerd[1543]: time="2025-11-08T00:18:05.322005305Z" level=info msg="StopPodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" returns successfully" Nov 8 00:18:05.323787 containerd[1543]: time="2025-11-08T00:18:05.323556539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-lvp22,Uid:fcbbea23-b3d5-4961-9b75-cd8ace86f6c6,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:18:05.353202 systemd[1]: run-netns-cni\x2d3e336e52\x2d9b7c\x2d106d\x2dcb01\x2d87364cbcb23e.mount: Deactivated successfully. Nov 8 00:18:05.353346 systemd[1]: run-netns-cni\x2d2c939f87\x2d8233\x2da0eb\x2d993b\x2dffe4d3bcd7f1.mount: Deactivated successfully. 
Nov 8 00:18:05.405408 systemd-networkd[1249]: cali9e3042cfcc3: Link UP Nov 8 00:18:05.406191 systemd-networkd[1249]: cali9e3042cfcc3: Gained carrier Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.352 [INFO][4887] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.362 [INFO][4887] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0 calico-apiserver-5d46c4cb7- calico-apiserver fcbbea23-b3d5-4961-9b75-cd8ace86f6c6 1019 0 2025-11-08 00:17:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d46c4cb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d46c4cb7-lvp22 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e3042cfcc3 [] [] }} ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.362 [INFO][4887] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.381 [INFO][4899] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" HandleID="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.381 [INFO][4899] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" HandleID="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d46c4cb7-lvp22", "timestamp":"2025-11-08 00:18:05.381352748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.381 [INFO][4899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.381 [INFO][4899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.381 [INFO][4899] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.386 [INFO][4899] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.389 [INFO][4899] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.391 [INFO][4899] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.392 [INFO][4899] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.393 [INFO][4899] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.393 [INFO][4899] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.394 [INFO][4899] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5 Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.396 [INFO][4899] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.399 [INFO][4899] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.399 [INFO][4899] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" host="localhost" Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.399 [INFO][4899] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:05.417674 containerd[1543]: 2025-11-08 00:18:05.399 [INFO][4899] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" HandleID="k8s-pod-network.9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.419933 containerd[1543]: 2025-11-08 00:18:05.401 [INFO][4887] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d46c4cb7-lvp22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3042cfcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:05.419933 containerd[1543]: 2025-11-08 00:18:05.402 [INFO][4887] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.419933 containerd[1543]: 2025-11-08 00:18:05.402 [INFO][4887] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e3042cfcc3 ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.419933 containerd[1543]: 2025-11-08 00:18:05.406 [INFO][4887] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.419933 containerd[1543]: 2025-11-08 00:18:05.407 [INFO][4887] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5", Pod:"calico-apiserver-5d46c4cb7-lvp22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3042cfcc3", MAC:"26:87:ae:3f:73:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:05.419933 containerd[1543]: 2025-11-08 00:18:05.415 [INFO][4887] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5" Namespace="calico-apiserver" Pod="calico-apiserver-5d46c4cb7-lvp22" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:05.429756 systemd-networkd[1249]: cali538aaa1971b: Gained IPv6LL Nov 8 00:18:05.433414 containerd[1543]: time="2025-11-08T00:18:05.433343687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:05.433414 containerd[1543]: time="2025-11-08T00:18:05.433372274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:05.433414 containerd[1543]: time="2025-11-08T00:18:05.433379371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:05.433644 containerd[1543]: time="2025-11-08T00:18:05.433449865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:05.446401 systemd[1]: run-containerd-runc-k8s.io-9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5-runc.9bM6HG.mount: Deactivated successfully. Nov 8 00:18:05.455421 systemd[1]: Started cri-containerd-9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5.scope - libcontainer container 9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5. 
Nov 8 00:18:05.463540 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:05.466347 kubelet[2729]: I1108 00:18:05.466308 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:18:05.508196 containerd[1543]: time="2025-11-08T00:18:05.508150951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d46c4cb7-lvp22,Uid:fcbbea23-b3d5-4961-9b75-cd8ace86f6c6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5\"" Nov 8 00:18:05.509950 containerd[1543]: time="2025-11-08T00:18:05.509055671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:05.713587 kubelet[2729]: E1108 00:18:05.713437 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:18:05.713587 kubelet[2729]: E1108 00:18:05.713541 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:18:05.718099 kubelet[2729]: I1108 00:18:05.718060 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fnnck" podStartSLOduration=41.718046793 podStartE2EDuration="41.718046793s" podCreationTimestamp="2025-11-08 00:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:18:05.717674735 +0000 UTC m=+47.599585505" watchObservedRunningTime="2025-11-08 00:18:05.718046793 +0000 UTC m=+47.599957565" Nov 8 00:18:05.864927 containerd[1543]: time="2025-11-08T00:18:05.864831567Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:05.868690 containerd[1543]: time="2025-11-08T00:18:05.868535777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:05.868690 containerd[1543]: time="2025-11-08T00:18:05.868578891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:05.868766 kubelet[2729]: E1108 00:18:05.868681 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:05.868766 kubelet[2729]: E1108 00:18:05.868721 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:05.868830 kubelet[2729]: E1108 00:18:05.868770 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d46c4cb7-lvp22_calico-apiserver(fcbbea23-b3d5-4961-9b75-cd8ace86f6c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:05.868830 kubelet[2729]: E1108 00:18:05.868794 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:18:05.877380 systemd-networkd[1249]: caliceb13f98382: Gained IPv6LL Nov 8 00:18:06.005434 systemd-networkd[1249]: cali1a170b6c890: Gained IPv6LL Nov 8 00:18:06.268643 containerd[1543]: time="2025-11-08T00:18:06.268218784Z" level=info msg="StopPodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\"" Nov 8 00:18:06.269324 containerd[1543]: time="2025-11-08T00:18:06.269129955Z" level=info msg="StopPodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\"" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.333 [INFO][5010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.334 [INFO][5010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" iface="eth0" netns="/var/run/netns/cni-fe296215-4a6c-bed5-690e-2ffc8c8a4739" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.334 [INFO][5010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" iface="eth0" netns="/var/run/netns/cni-fe296215-4a6c-bed5-690e-2ffc8c8a4739" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.335 [INFO][5010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" iface="eth0" netns="/var/run/netns/cni-fe296215-4a6c-bed5-690e-2ffc8c8a4739" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.335 [INFO][5010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.335 [INFO][5010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.352 [INFO][5028] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.353 [INFO][5028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.353 [INFO][5028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.356 [WARNING][5028] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.356 [INFO][5028] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.357 [INFO][5028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:06.359931 containerd[1543]: 2025-11-08 00:18:06.358 [INFO][5010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:06.362920 containerd[1543]: time="2025-11-08T00:18:06.362529885Z" level=info msg="TearDown network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" successfully" Nov 8 00:18:06.362920 containerd[1543]: time="2025-11-08T00:18:06.362560893Z" level=info msg="StopPodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" returns successfully" Nov 8 00:18:06.363626 systemd[1]: run-netns-cni\x2dfe296215\x2d4a6c\x2dbed5\x2d690e\x2d2ffc8c8a4739.mount: Deactivated successfully. Nov 8 00:18:06.364849 containerd[1543]: time="2025-11-08T00:18:06.364742886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dfl5f,Uid:356509c0-5649-4d69-a66f-1b0a4ca00464,Namespace:calico-system,Attempt:1,}" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.327 [INFO][5006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.327 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" iface="eth0" netns="/var/run/netns/cni-02806d41-627a-c223-3b7a-7d4cac975c3e" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.327 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" iface="eth0" netns="/var/run/netns/cni-02806d41-627a-c223-3b7a-7d4cac975c3e" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.327 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" iface="eth0" netns="/var/run/netns/cni-02806d41-627a-c223-3b7a-7d4cac975c3e" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.327 [INFO][5006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.327 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.353 [INFO][5023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.353 [INFO][5023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.357 [INFO][5023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.364 [WARNING][5023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.364 [INFO][5023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.365 [INFO][5023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:06.368578 containerd[1543]: 2025-11-08 00:18:06.366 [INFO][5006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:06.369969 containerd[1543]: time="2025-11-08T00:18:06.369429563Z" level=info msg="TearDown network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" successfully" Nov 8 00:18:06.369969 containerd[1543]: time="2025-11-08T00:18:06.369446782Z" level=info msg="StopPodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" returns successfully" Nov 8 00:18:06.370819 systemd[1]: run-netns-cni\x2d02806d41\x2d627a\x2dc223\x2d3b7a\x2d7d4cac975c3e.mount: Deactivated successfully. 
Nov 8 00:18:06.371612 containerd[1543]: time="2025-11-08T00:18:06.371428736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzsgq,Uid:d3463313-ed82-40c5-914e-c6418d31744b,Namespace:calico-system,Attempt:1,}" Nov 8 00:18:06.463628 systemd-networkd[1249]: calia0ee7659087: Link UP Nov 8 00:18:06.463992 systemd-networkd[1249]: calia0ee7659087: Gained carrier Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.405 [INFO][5047] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.414 [INFO][5047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tzsgq-eth0 csi-node-driver- calico-system d3463313-ed82-40c5-914e-c6418d31744b 1064 0 2025-11-08 00:17:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tzsgq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia0ee7659087 [] [] }} ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.414 [INFO][5047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.435 [INFO][5060] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" HandleID="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.435 [INFO][5060] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" HandleID="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tzsgq", "timestamp":"2025-11-08 00:18:06.435100181 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.435 [INFO][5060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.435 [INFO][5060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.435 [INFO][5060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.439 [INFO][5060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.441 [INFO][5060] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.443 [INFO][5060] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.444 [INFO][5060] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.446 [INFO][5060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.446 [INFO][5060] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.446 [INFO][5060] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.450 [INFO][5060] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.453 [INFO][5060] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.453 [INFO][5060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" host="localhost" Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.453 [INFO][5060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:06.472933 containerd[1543]: 2025-11-08 00:18:06.453 [INFO][5060] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" HandleID="k8s-pod-network.e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.475493 containerd[1543]: 2025-11-08 00:18:06.455 [INFO][5047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tzsgq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3463313-ed82-40c5-914e-c6418d31744b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tzsgq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0ee7659087", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:06.475493 containerd[1543]: 2025-11-08 00:18:06.455 [INFO][5047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.475493 containerd[1543]: 2025-11-08 00:18:06.455 [INFO][5047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0ee7659087 ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.475493 containerd[1543]: 2025-11-08 00:18:06.464 [INFO][5047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.475493 containerd[1543]: 2025-11-08 00:18:06.464 [INFO][5047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tzsgq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3463313-ed82-40c5-914e-c6418d31744b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b", Pod:"csi-node-driver-tzsgq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0ee7659087", MAC:"66:4b:d9:df:fe:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:06.475493 containerd[1543]: 2025-11-08 00:18:06.471 [INFO][5047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b" Namespace="calico-system" Pod="csi-node-driver-tzsgq" WorkloadEndpoint="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:06.499101 containerd[1543]: time="2025-11-08T00:18:06.498957200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:06.499101 containerd[1543]: time="2025-11-08T00:18:06.498987968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:06.499101 containerd[1543]: time="2025-11-08T00:18:06.498994815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:06.499101 containerd[1543]: time="2025-11-08T00:18:06.499035072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:06.521453 systemd[1]: Started cri-containerd-e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b.scope - libcontainer container e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b. 
Nov 8 00:18:06.529429 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:06.536533 containerd[1543]: time="2025-11-08T00:18:06.536459715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzsgq,Uid:d3463313-ed82-40c5-914e-c6418d31744b,Namespace:calico-system,Attempt:1,} returns sandbox id \"e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b\"" Nov 8 00:18:06.545076 containerd[1543]: time="2025-11-08T00:18:06.544916435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:18:06.565354 systemd-networkd[1249]: cali168c62f2d16: Link UP Nov 8 00:18:06.565860 systemd-networkd[1249]: cali168c62f2d16: Gained carrier Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.405 [INFO][5037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.414 [INFO][5037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--dfl5f-eth0 goldmane-7c778bb748- calico-system 356509c0-5649-4d69-a66f-1b0a4ca00464 1065 0 2025-11-08 00:17:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-dfl5f eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali168c62f2d16 [] [] }} ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.414 [INFO][5037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.437 [INFO][5065] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" HandleID="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.438 [INFO][5065] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" HandleID="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-dfl5f", "timestamp":"2025-11-08 00:18:06.437938686 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.438 [INFO][5065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.453 [INFO][5065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.453 [INFO][5065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.541 [INFO][5065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.547 [INFO][5065] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.550 [INFO][5065] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.551 [INFO][5065] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.553 [INFO][5065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.553 [INFO][5065] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.554 [INFO][5065] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598 Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.556 [INFO][5065] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.559 [INFO][5065] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.559 [INFO][5065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" host="localhost" Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.559 [INFO][5065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:18:06.576650 containerd[1543]: 2025-11-08 00:18:06.559 [INFO][5065] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" HandleID="k8s-pod-network.0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.577101 containerd[1543]: 2025-11-08 00:18:06.561 [INFO][5037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dfl5f-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"356509c0-5649-4d69-a66f-1b0a4ca00464", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-dfl5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali168c62f2d16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:06.577101 containerd[1543]: 2025-11-08 00:18:06.561 [INFO][5037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.577101 containerd[1543]: 2025-11-08 00:18:06.561 [INFO][5037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali168c62f2d16 ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.577101 containerd[1543]: 2025-11-08 00:18:06.566 [INFO][5037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.577101 containerd[1543]: 2025-11-08 00:18:06.567 [INFO][5037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dfl5f-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"356509c0-5649-4d69-a66f-1b0a4ca00464", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598", Pod:"goldmane-7c778bb748-dfl5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali168c62f2d16", MAC:"52:6f:c2:64:f3:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:06.577101 containerd[1543]: 2025-11-08 00:18:06.575 [INFO][5037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598" Namespace="calico-system" Pod="goldmane-7c778bb748-dfl5f" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:06.589453 containerd[1543]: time="2025-11-08T00:18:06.589365889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:18:06.589453 containerd[1543]: time="2025-11-08T00:18:06.589437741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:18:06.589651 containerd[1543]: time="2025-11-08T00:18:06.589449524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:06.589651 containerd[1543]: time="2025-11-08T00:18:06.589512899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:18:06.607602 systemd[1]: Started cri-containerd-0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598.scope - libcontainer container 0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598. 
Nov 8 00:18:06.618348 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:18:06.640323 containerd[1543]: time="2025-11-08T00:18:06.640135160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dfl5f,Uid:356509c0-5649-4d69-a66f-1b0a4ca00464,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598\"" Nov 8 00:18:06.646439 systemd-networkd[1249]: cali2af051215d1: Gained IPv6LL Nov 8 00:18:06.718592 kubelet[2729]: E1108 00:18:06.718569 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:18:06.838489 systemd-networkd[1249]: cali9e3042cfcc3: Gained IPv6LL Nov 8 00:18:06.884322 kernel: bpftool[5190]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:18:06.888876 containerd[1543]: time="2025-11-08T00:18:06.888756600Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:06.889155 containerd[1543]: time="2025-11-08T00:18:06.889093024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:18:06.889219 containerd[1543]: time="2025-11-08T00:18:06.889133304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:18:06.889319 kubelet[2729]: E1108 00:18:06.889261 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:06.889436 kubelet[2729]: E1108 00:18:06.889412 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:06.889577 kubelet[2729]: E1108 00:18:06.889550 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:06.890179 containerd[1543]: time="2025-11-08T00:18:06.890076439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:18:07.073805 systemd-networkd[1249]: 
vxlan.calico: Link UP Nov 8 00:18:07.073811 systemd-networkd[1249]: vxlan.calico: Gained carrier Nov 8 00:18:07.235272 containerd[1543]: time="2025-11-08T00:18:07.235239189Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:07.235559 containerd[1543]: time="2025-11-08T00:18:07.235535101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:18:07.235612 containerd[1543]: time="2025-11-08T00:18:07.235587510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:07.235752 kubelet[2729]: E1108 00:18:07.235701 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:07.235752 kubelet[2729]: E1108 00:18:07.235741 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:07.236179 kubelet[2729]: E1108 00:18:07.235970 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dfl5f_calico-system(356509c0-5649-4d69-a66f-1b0a4ca00464): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:07.236179 kubelet[2729]: E1108 00:18:07.236001 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:18:07.236483 containerd[1543]: time="2025-11-08T00:18:07.236470649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:18:07.680905 containerd[1543]: time="2025-11-08T00:18:07.680765269Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:07.689369 containerd[1543]: time="2025-11-08T00:18:07.689252147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:18:07.689369 containerd[1543]: time="2025-11-08T00:18:07.689313026Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:18:07.690764 kubelet[2729]: E1108 00:18:07.689631 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:07.690764 kubelet[2729]: E1108 00:18:07.689670 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:07.690764 kubelet[2729]: E1108 00:18:07.689727 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:07.690967 kubelet[2729]: E1108 00:18:07.689761 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:18:07.721580 kubelet[2729]: E1108 00:18:07.721526 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:18:07.722460 kubelet[2729]: E1108 00:18:07.721845 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:18:08.309558 systemd-networkd[1249]: calia0ee7659087: Gained IPv6LL Nov 8 00:18:08.373437 systemd-networkd[1249]: cali168c62f2d16: Gained IPv6LL Nov 8 00:18:08.821433 systemd-networkd[1249]: vxlan.calico: Gained IPv6LL Nov 8 00:18:10.268317 containerd[1543]: time="2025-11-08T00:18:10.268276949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:18:10.610401 containerd[1543]: time="2025-11-08T00:18:10.610163539Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:10.610640 containerd[1543]: time="2025-11-08T00:18:10.610613323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:18:10.610717 containerd[1543]: time="2025-11-08T00:18:10.610677942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:18:10.610872 kubelet[2729]: E1108 00:18:10.610826 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:10.610872 kubelet[2729]: E1108 00:18:10.610867 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:10.611734 kubelet[2729]: E1108 00:18:10.610928 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:10.612328 containerd[1543]: time="2025-11-08T00:18:10.612292315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:18:10.953750 containerd[1543]: time="2025-11-08T00:18:10.953709516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:10.964917 containerd[1543]: time="2025-11-08T00:18:10.964883535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:18:10.964980 containerd[1543]: time="2025-11-08T00:18:10.964948863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:10.965226 kubelet[2729]: E1108 00:18:10.965194 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:10.965273 kubelet[2729]: E1108 00:18:10.965240 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:10.965345 kubelet[2729]: E1108 00:18:10.965326 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:10.965381 kubelet[2729]: E1108 00:18:10.965367 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:18:16.267697 containerd[1543]: time="2025-11-08T00:18:16.267614811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:16.639499 containerd[1543]: time="2025-11-08T00:18:16.639179589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:16.639686 containerd[1543]: time="2025-11-08T00:18:16.639644710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:16.640105 containerd[1543]: time="2025-11-08T00:18:16.639708664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 8 00:18:16.640325 kubelet[2729]: E1108 00:18:16.640186 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:16.640325 kubelet[2729]: E1108 00:18:16.640238 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:16.641249 kubelet[2729]: E1108 00:18:16.640413 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d46c4cb7-dlrvn_calico-apiserver(e5b24471-658f-4121-a78d-73d2e59f83f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:16.641249 kubelet[2729]: E1108 00:18:16.640456 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:18:18.257837 containerd[1543]: time="2025-11-08T00:18:18.257798855Z" level=info msg="StopPodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\"" Nov 8 00:18:18.285844 containerd[1543]: time="2025-11-08T00:18:18.285463842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:18.631469 containerd[1543]: time="2025-11-08T00:18:18.631382409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:18.632549 containerd[1543]: time="2025-11-08T00:18:18.632526799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:18.632606 containerd[1543]: time="2025-11-08T00:18:18.632581862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:18.632714 kubelet[2729]: E1108 00:18:18.632685 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:18.632900 kubelet[2729]: E1108 00:18:18.632721 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:18.632900 kubelet[2729]: E1108 00:18:18.632771 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d46c4cb7-lvp22_calico-apiserver(fcbbea23-b3d5-4961-9b75-cd8ace86f6c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:18.632900 kubelet[2729]: E1108 00:18:18.632796 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.625 [WARNING][5317] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" WorkloadEndpoint="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.625 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.625 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" iface="eth0" netns="" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.625 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.625 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.644 [INFO][5324] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.644 [INFO][5324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.644 [INFO][5324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.650 [WARNING][5324] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.650 [INFO][5324] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.653 [INFO][5324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:18.656791 containerd[1543]: 2025-11-08 00:18:18.655 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.656791 containerd[1543]: time="2025-11-08T00:18:18.656697575Z" level=info msg="TearDown network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" successfully" Nov 8 00:18:18.656791 containerd[1543]: time="2025-11-08T00:18:18.656717455Z" level=info msg="StopPodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" returns successfully" Nov 8 00:18:18.784287 containerd[1543]: time="2025-11-08T00:18:18.784067313Z" level=info msg="RemovePodSandbox for \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\"" Nov 8 00:18:18.784287 containerd[1543]: time="2025-11-08T00:18:18.784096096Z" level=info msg="Forcibly stopping sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\"" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.808 [WARNING][5338] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" WorkloadEndpoint="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.808 [INFO][5338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.808 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" iface="eth0" netns="" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.808 [INFO][5338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.808 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.823 [INFO][5346] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.823 [INFO][5346] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.823 [INFO][5346] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.827 [WARNING][5346] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.827 [INFO][5346] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" HandleID="k8s-pod-network.323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Workload="localhost-k8s-whisker--b555dc9bb--5mdbn-eth0" Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.828 [INFO][5346] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:18.830791 containerd[1543]: 2025-11-08 00:18:18.829 [INFO][5338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff" Nov 8 00:18:18.832077 containerd[1543]: time="2025-11-08T00:18:18.831144892Z" level=info msg="TearDown network for sandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" successfully" Nov 8 00:18:18.847154 containerd[1543]: time="2025-11-08T00:18:18.846572483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:18.847154 containerd[1543]: time="2025-11-08T00:18:18.846609218Z" level=info msg="RemovePodSandbox \"323f688a526e8c1f75dd0090eb473941e081eba26dfcd558b02454ff5ab11dff\" returns successfully" Nov 8 00:18:18.847676 containerd[1543]: time="2025-11-08T00:18:18.847663626Z" level=info msg="StopPodSandbox for \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\"" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.870 [WARNING][5360] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b24471-658f-4121-a78d-73d2e59f83f1", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea", Pod:"calico-apiserver-5d46c4cb7-dlrvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali538aaa1971b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.871 [INFO][5360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.871 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" iface="eth0" netns="" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.871 [INFO][5360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.871 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.887 [INFO][5367] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.887 [INFO][5367] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.888 [INFO][5367] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.899 [WARNING][5367] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.899 [INFO][5367] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.900 [INFO][5367] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:18.902679 containerd[1543]: 2025-11-08 00:18:18.901 [INFO][5360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.902679 containerd[1543]: time="2025-11-08T00:18:18.902604639Z" level=info msg="TearDown network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" successfully" Nov 8 00:18:18.902679 containerd[1543]: time="2025-11-08T00:18:18.902621323Z" level=info msg="StopPodSandbox for \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" returns successfully" Nov 8 00:18:18.920228 containerd[1543]: time="2025-11-08T00:18:18.920034733Z" level=info msg="RemovePodSandbox for \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\"" Nov 8 00:18:18.920228 containerd[1543]: time="2025-11-08T00:18:18.920063453Z" level=info msg="Forcibly stopping sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\"" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.942 [WARNING][5381] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5b24471-658f-4121-a78d-73d2e59f83f1", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a9dcd0f439be7c3bac591ee9279cf3599c4788f33b07b2cc20518a744c689ea", Pod:"calico-apiserver-5d46c4cb7-dlrvn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali538aaa1971b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.943 [INFO][5381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.943 [INFO][5381] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" iface="eth0" netns="" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.943 [INFO][5381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.943 [INFO][5381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.959 [INFO][5388] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.959 [INFO][5388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.959 [INFO][5388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.963 [WARNING][5388] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.963 [INFO][5388] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" HandleID="k8s-pod-network.8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--dlrvn-eth0" Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.964 [INFO][5388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:18.967480 containerd[1543]: 2025-11-08 00:18:18.965 [INFO][5381] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6" Nov 8 00:18:18.967480 containerd[1543]: time="2025-11-08T00:18:18.966605918Z" level=info msg="TearDown network for sandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" successfully" Nov 8 00:18:18.997604 containerd[1543]: time="2025-11-08T00:18:18.997541516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:18.997604 containerd[1543]: time="2025-11-08T00:18:18.997584591Z" level=info msg="RemovePodSandbox \"8557d16401825c2a62303c4719a620125662cea10ab8955e9009e3e14d7542a6\" returns successfully" Nov 8 00:18:18.998152 containerd[1543]: time="2025-11-08T00:18:18.998133751Z" level=info msg="StopPodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\"" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.049 [WARNING][5402] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0", GenerateName:"calico-apiserver-7c7df7fbc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"793d1558-9161-49f1-a744-65e7ee945bd5", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c7df7fbc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2", Pod:"calico-apiserver-7c7df7fbc6-cwgpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af051215d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.049 [INFO][5402] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.049 [INFO][5402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" iface="eth0" netns="" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.049 [INFO][5402] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.049 [INFO][5402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.065 [INFO][5409] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.065 [INFO][5409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.065 [INFO][5409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.069 [WARNING][5409] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.069 [INFO][5409] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.070 [INFO][5409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.072786 containerd[1543]: 2025-11-08 00:18:19.071 [INFO][5402] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.073210 containerd[1543]: time="2025-11-08T00:18:19.073113322Z" level=info msg="TearDown network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" successfully" Nov 8 00:18:19.073210 containerd[1543]: time="2025-11-08T00:18:19.073134254Z" level=info msg="StopPodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" returns successfully" Nov 8 00:18:19.073581 containerd[1543]: time="2025-11-08T00:18:19.073564049Z" level=info msg="RemovePodSandbox for \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\"" Nov 8 00:18:19.073621 containerd[1543]: time="2025-11-08T00:18:19.073583950Z" level=info msg="Forcibly stopping sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\"" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.094 [WARNING][5423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0", GenerateName:"calico-apiserver-7c7df7fbc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"793d1558-9161-49f1-a744-65e7ee945bd5", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c7df7fbc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09ca7dac31dca105c05e461535d001a82979b85a21d084dea33d974b1d2fb8f2", Pod:"calico-apiserver-7c7df7fbc6-cwgpp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2af051215d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.095 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.095 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" iface="eth0" netns="" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.095 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.095 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.173 [INFO][5430] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.173 [INFO][5430] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.173 [INFO][5430] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.177 [WARNING][5430] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.177 [INFO][5430] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" HandleID="k8s-pod-network.29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Workload="localhost-k8s-calico--apiserver--7c7df7fbc6--cwgpp-eth0" Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.178 [INFO][5430] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.181363 containerd[1543]: 2025-11-08 00:18:19.179 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d" Nov 8 00:18:19.181363 containerd[1543]: time="2025-11-08T00:18:19.181186709Z" level=info msg="TearDown network for sandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" successfully" Nov 8 00:18:19.214944 containerd[1543]: time="2025-11-08T00:18:19.214902520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.215056 containerd[1543]: time="2025-11-08T00:18:19.214960040Z" level=info msg="RemovePodSandbox \"29472098667a80ad47b6d7e756e614c3a90287fca77f9dc02a8625bfeb8ff85d\" returns successfully" Nov 8 00:18:19.215433 containerd[1543]: time="2025-11-08T00:18:19.215404252Z" level=info msg="StopPodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\"" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.237 [WARNING][5444] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tzsgq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3463313-ed82-40c5-914e-c6418d31744b", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b", Pod:"csi-node-driver-tzsgq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0ee7659087", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.237 [INFO][5444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.237 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" iface="eth0" netns="" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.237 [INFO][5444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.237 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.295 [INFO][5451] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.296 [INFO][5451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.296 [INFO][5451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.300 [WARNING][5451] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.300 [INFO][5451] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.302 [INFO][5451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.304805 containerd[1543]: 2025-11-08 00:18:19.303 [INFO][5444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.304805 containerd[1543]: time="2025-11-08T00:18:19.304599910Z" level=info msg="TearDown network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" successfully" Nov 8 00:18:19.304805 containerd[1543]: time="2025-11-08T00:18:19.304616140Z" level=info msg="StopPodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" returns successfully" Nov 8 00:18:19.305436 containerd[1543]: time="2025-11-08T00:18:19.304912707Z" level=info msg="RemovePodSandbox for \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\"" Nov 8 00:18:19.305436 containerd[1543]: time="2025-11-08T00:18:19.304928147Z" level=info msg="Forcibly stopping sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\"" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.337 [WARNING][5465] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tzsgq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3463313-ed82-40c5-914e-c6418d31744b", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e72fab85c4d784bb910f91f4836b0b9b1b64f57d505e77f8ab227d1f7d497a8b", Pod:"csi-node-driver-tzsgq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia0ee7659087", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.337 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.337 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" iface="eth0" netns="" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.337 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.337 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.350 [INFO][5472] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.350 [INFO][5472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.350 [INFO][5472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.353 [WARNING][5472] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.353 [INFO][5472] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" HandleID="k8s-pod-network.3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Workload="localhost-k8s-csi--node--driver--tzsgq-eth0" Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.354 [INFO][5472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.356802 containerd[1543]: 2025-11-08 00:18:19.355 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf" Nov 8 00:18:19.357111 containerd[1543]: time="2025-11-08T00:18:19.356827786Z" level=info msg="TearDown network for sandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" successfully" Nov 8 00:18:19.387882 containerd[1543]: time="2025-11-08T00:18:19.387828794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.387987 containerd[1543]: time="2025-11-08T00:18:19.387894739Z" level=info msg="RemovePodSandbox \"3ebdc4d9a01763eea01ca5fb0bc2fae25d77ab7f5023d32d3f027b0d16b1bccf\" returns successfully" Nov 8 00:18:19.388240 containerd[1543]: time="2025-11-08T00:18:19.388221814Z" level=info msg="StopPodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\"" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.410 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fnnck-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c30a-8018-4cf5-ac73-034017743c72", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e", Pod:"coredns-66bc5c9577-fnnck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a170b6c890", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.410 [INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.410 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" iface="eth0" netns="" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.410 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.410 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.424 [INFO][5494] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.424 [INFO][5494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.424 [INFO][5494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.428 [WARNING][5494] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.428 [INFO][5494] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.429 [INFO][5494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.432293 containerd[1543]: 2025-11-08 00:18:19.430 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.432293 containerd[1543]: time="2025-11-08T00:18:19.432270243Z" level=info msg="TearDown network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" successfully" Nov 8 00:18:19.432293 containerd[1543]: time="2025-11-08T00:18:19.432286295Z" level=info msg="StopPodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" returns successfully" Nov 8 00:18:19.433336 containerd[1543]: time="2025-11-08T00:18:19.432571946Z" level=info msg="RemovePodSandbox for \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\"" Nov 8 00:18:19.433336 containerd[1543]: time="2025-11-08T00:18:19.432588680Z" level=info msg="Forcibly stopping sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\"" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.455 [WARNING][5508] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fnnck-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"db03c30a-8018-4cf5-ac73-034017743c72", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8455899c39d869e2d7a6a5324a2cd25c7957f4e21b4eded79a60256d84a53b6e", Pod:"coredns-66bc5c9577-fnnck", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a170b6c890", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.455 [INFO][5508] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.455 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" iface="eth0" netns="" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.455 [INFO][5508] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.455 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.469 [INFO][5515] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.469 [INFO][5515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.469 [INFO][5515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.473 [WARNING][5515] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.473 [INFO][5515] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" HandleID="k8s-pod-network.0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Workload="localhost-k8s-coredns--66bc5c9577--fnnck-eth0" Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.474 [INFO][5515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.476954 containerd[1543]: 2025-11-08 00:18:19.475 [INFO][5508] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170" Nov 8 00:18:19.477295 containerd[1543]: time="2025-11-08T00:18:19.476982639Z" level=info msg="TearDown network for sandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" successfully" Nov 8 00:18:19.509673 containerd[1543]: time="2025-11-08T00:18:19.509641907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.509765 containerd[1543]: time="2025-11-08T00:18:19.509682939Z" level=info msg="RemovePodSandbox \"0d8d645b2fd7dbff7492b0ab7a89c048990d7ad61034d0193bfe6e6d3f898170\" returns successfully" Nov 8 00:18:19.510092 containerd[1543]: time="2025-11-08T00:18:19.510074193Z" level=info msg="StopPodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\"" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.531 [WARNING][5529] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2wlbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e0e38e94-a84e-4a13-adfe-e757102f7549", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245", Pod:"coredns-66bc5c9577-2wlbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali276e7617470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.531 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.531 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" iface="eth0" netns="" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.531 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.531 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.545 [INFO][5537] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.545 [INFO][5537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.545 [INFO][5537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.549 [WARNING][5537] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.549 [INFO][5537] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.550 [INFO][5537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.552515 containerd[1543]: 2025-11-08 00:18:19.551 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.553093 containerd[1543]: time="2025-11-08T00:18:19.552539545Z" level=info msg="TearDown network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" successfully" Nov 8 00:18:19.553093 containerd[1543]: time="2025-11-08T00:18:19.552556209Z" level=info msg="StopPodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" returns successfully" Nov 8 00:18:19.553093 containerd[1543]: time="2025-11-08T00:18:19.553013566Z" level=info msg="RemovePodSandbox for \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\"" Nov 8 00:18:19.553093 containerd[1543]: time="2025-11-08T00:18:19.553029787Z" level=info msg="Forcibly stopping sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\"" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.573 [WARNING][5551] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2wlbm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e0e38e94-a84e-4a13-adfe-e757102f7549", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"141fd53f8d4c79f2b77a592f437b270dba0028365ce5886411f6b5db25c50245", Pod:"coredns-66bc5c9577-2wlbm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali276e7617470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.573 [INFO][5551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.573 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" iface="eth0" netns="" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.573 [INFO][5551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.573 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.586 [INFO][5558] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.586 [INFO][5558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.586 [INFO][5558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.590 [WARNING][5558] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.590 [INFO][5558] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" HandleID="k8s-pod-network.0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Workload="localhost-k8s-coredns--66bc5c9577--2wlbm-eth0" Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.591 [INFO][5558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.593416 containerd[1543]: 2025-11-08 00:18:19.592 [INFO][5551] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644" Nov 8 00:18:19.593916 containerd[1543]: time="2025-11-08T00:18:19.593439912Z" level=info msg="TearDown network for sandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" successfully" Nov 8 00:18:19.609618 containerd[1543]: time="2025-11-08T00:18:19.609592718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.609667 containerd[1543]: time="2025-11-08T00:18:19.609634974Z" level=info msg="RemovePodSandbox \"0960d3ae8b5b3c21ff19533fbc15f4e824c0d345e0b13869dfc4866f06a56644\" returns successfully" Nov 8 00:18:19.610142 containerd[1543]: time="2025-11-08T00:18:19.609949837Z" level=info msg="StopPodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\"" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.631 [WARNING][5572] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5", Pod:"calico-apiserver-5d46c4cb7-lvp22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3042cfcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.631 [INFO][5572] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.631 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" iface="eth0" netns="" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.631 [INFO][5572] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.632 [INFO][5572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.646 [INFO][5579] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.646 [INFO][5579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.646 [INFO][5579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.650 [WARNING][5579] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.650 [INFO][5579] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.651 [INFO][5579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.653494 containerd[1543]: 2025-11-08 00:18:19.652 [INFO][5572] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.653938 containerd[1543]: time="2025-11-08T00:18:19.653846679Z" level=info msg="TearDown network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" successfully" Nov 8 00:18:19.653938 containerd[1543]: time="2025-11-08T00:18:19.653864293Z" level=info msg="StopPodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" returns successfully" Nov 8 00:18:19.654170 containerd[1543]: time="2025-11-08T00:18:19.654158347Z" level=info msg="RemovePodSandbox for \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\"" Nov 8 00:18:19.654197 containerd[1543]: time="2025-11-08T00:18:19.654175928Z" level=info msg="Forcibly stopping sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\"" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.675 [WARNING][5593] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0", GenerateName:"calico-apiserver-5d46c4cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbbea23-b3d5-4961-9b75-cd8ace86f6c6", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d46c4cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9829ac0615ca5d57d564674b5fefa955560677f5c09c2add4cd43364d1ce68d5", Pod:"calico-apiserver-5d46c4cb7-lvp22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e3042cfcc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.675 [INFO][5593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.675 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" iface="eth0" netns="" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.675 [INFO][5593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.675 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.690 [INFO][5600] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.690 [INFO][5600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.690 [INFO][5600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.693 [WARNING][5600] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.693 [INFO][5600] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" HandleID="k8s-pod-network.ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Workload="localhost-k8s-calico--apiserver--5d46c4cb7--lvp22-eth0" Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.694 [INFO][5600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.697231 containerd[1543]: 2025-11-08 00:18:19.696 [INFO][5593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7" Nov 8 00:18:19.697231 containerd[1543]: time="2025-11-08T00:18:19.697197297Z" level=info msg="TearDown network for sandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" successfully" Nov 8 00:18:19.713991 containerd[1543]: time="2025-11-08T00:18:19.713933479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.713991 containerd[1543]: time="2025-11-08T00:18:19.713983182Z" level=info msg="RemovePodSandbox \"ee2cf1a199360f64515f46bdacd124dce16c2a8ed3b173c5c3709f9e3b04f4a7\" returns successfully" Nov 8 00:18:19.714650 containerd[1543]: time="2025-11-08T00:18:19.714475983Z" level=info msg="StopPodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\"" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.737 [WARNING][5614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dfl5f-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"356509c0-5649-4d69-a66f-1b0a4ca00464", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598", Pod:"goldmane-7c778bb748-dfl5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali168c62f2d16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.738 [INFO][5614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.738 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" iface="eth0" netns="" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.738 [INFO][5614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.738 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.756 [INFO][5621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.756 [INFO][5621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.756 [INFO][5621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.760 [WARNING][5621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.760 [INFO][5621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.760 [INFO][5621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.763159 containerd[1543]: 2025-11-08 00:18:19.762 [INFO][5614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.763755 containerd[1543]: time="2025-11-08T00:18:19.763589422Z" level=info msg="TearDown network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" successfully" Nov 8 00:18:19.763755 containerd[1543]: time="2025-11-08T00:18:19.763605698Z" level=info msg="StopPodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" returns successfully" Nov 8 00:18:19.763936 containerd[1543]: time="2025-11-08T00:18:19.763917368Z" level=info msg="RemovePodSandbox for \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\"" Nov 8 00:18:19.763960 containerd[1543]: time="2025-11-08T00:18:19.763935676Z" level=info msg="Forcibly stopping sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\"" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.788 [WARNING][5635] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dfl5f-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"356509c0-5649-4d69-a66f-1b0a4ca00464", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c7e567b64e03ba5608ce9abd60453447735d79fd3908434640d7522eafec598", Pod:"goldmane-7c778bb748-dfl5f", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali168c62f2d16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.789 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.789 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" iface="eth0" netns="" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.789 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.789 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.802 [INFO][5642] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.802 [INFO][5642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.802 [INFO][5642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.806 [WARNING][5642] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.806 [INFO][5642] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" HandleID="k8s-pod-network.3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Workload="localhost-k8s-goldmane--7c778bb748--dfl5f-eth0" Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.807 [INFO][5642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.809758 containerd[1543]: 2025-11-08 00:18:19.808 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a" Nov 8 00:18:19.814508 containerd[1543]: time="2025-11-08T00:18:19.809762304Z" level=info msg="TearDown network for sandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" successfully" Nov 8 00:18:19.825418 containerd[1543]: time="2025-11-08T00:18:19.825386695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.825485 containerd[1543]: time="2025-11-08T00:18:19.825430315Z" level=info msg="RemovePodSandbox \"3126821bbfea469243aa27d5e202334d7d987a6ac75558cb9162e36c191a677a\" returns successfully" Nov 8 00:18:19.825934 containerd[1543]: time="2025-11-08T00:18:19.825750279Z" level=info msg="StopPodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\"" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.859 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0", GenerateName:"calico-kube-controllers-6499957458-", Namespace:"calico-system", SelfLink:"", UID:"d70d6c69-7a19-4335-9162-f8ca1575049e", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6499957458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136", Pod:"calico-kube-controllers-6499957458-l5qw9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliceb13f98382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.859 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.859 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" iface="eth0" netns="" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.859 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.859 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.880 [INFO][5664] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.880 [INFO][5664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.880 [INFO][5664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.884 [WARNING][5664] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.884 [INFO][5664] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.885 [INFO][5664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.887835 containerd[1543]: 2025-11-08 00:18:19.886 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.888260 containerd[1543]: time="2025-11-08T00:18:19.887861285Z" level=info msg="TearDown network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" successfully" Nov 8 00:18:19.888260 containerd[1543]: time="2025-11-08T00:18:19.887876856Z" level=info msg="StopPodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" returns successfully" Nov 8 00:18:19.888260 containerd[1543]: time="2025-11-08T00:18:19.888237150Z" level=info msg="RemovePodSandbox for \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\"" Nov 8 00:18:19.888260 containerd[1543]: time="2025-11-08T00:18:19.888252166Z" level=info msg="Forcibly stopping sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\"" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.911 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0", GenerateName:"calico-kube-controllers-6499957458-", Namespace:"calico-system", SelfLink:"", UID:"d70d6c69-7a19-4335-9162-f8ca1575049e", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6499957458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b530f3903106175358f1058f8def2f707644836b8f8501b68ca0ee317efa136", Pod:"calico-kube-controllers-6499957458-l5qw9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliceb13f98382", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.911 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.911 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" iface="eth0" netns="" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.911 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.911 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.925 [INFO][5685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.926 [INFO][5685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.926 [INFO][5685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.930 [WARNING][5685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.930 [INFO][5685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" HandleID="k8s-pod-network.adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Workload="localhost-k8s-calico--kube--controllers--6499957458--l5qw9-eth0" Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.930 [INFO][5685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:18:19.934377 containerd[1543]: 2025-11-08 00:18:19.932 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e" Nov 8 00:18:19.934377 containerd[1543]: time="2025-11-08T00:18:19.933441259Z" level=info msg="TearDown network for sandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" successfully" Nov 8 00:18:19.936122 containerd[1543]: time="2025-11-08T00:18:19.935826063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:18:19.936122 containerd[1543]: time="2025-11-08T00:18:19.935857891Z" level=info msg="RemovePodSandbox \"adfa604ce3bcfc0ed43e3117a339cba8e13558a9118239a877677f9f6f30f63e\" returns successfully" Nov 8 00:18:20.268713 containerd[1543]: time="2025-11-08T00:18:20.268135107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:18:20.628544 containerd[1543]: time="2025-11-08T00:18:20.628441277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:20.628912 containerd[1543]: time="2025-11-08T00:18:20.628841695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:18:20.628969 containerd[1543]: time="2025-11-08T00:18:20.628933122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:20.629237 kubelet[2729]: E1108 00:18:20.629139 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:20.629237 kubelet[2729]: E1108 00:18:20.629206 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:20.629546 
kubelet[2729]: E1108 00:18:20.629296 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6499957458-l5qw9_calico-system(d70d6c69-7a19-4335-9162-f8ca1575049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:20.629546 kubelet[2729]: E1108 00:18:20.629376 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:18:21.267487 containerd[1543]: time="2025-11-08T00:18:21.267334960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:21.638492 containerd[1543]: time="2025-11-08T00:18:21.638389724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:21.638977 containerd[1543]: time="2025-11-08T00:18:21.638946981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:21.639061 containerd[1543]: time="2025-11-08T00:18:21.639012111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:21.639623 kubelet[2729]: E1108 00:18:21.639150 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:21.639623 kubelet[2729]: E1108 00:18:21.639195 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:21.639623 kubelet[2729]: E1108 00:18:21.639280 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c7df7fbc6-cwgpp_calico-apiserver(793d1558-9161-49f1-a744-65e7ee945bd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:21.639623 kubelet[2729]: E1108 00:18:21.639323 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:18:22.268741 containerd[1543]: time="2025-11-08T00:18:22.268373745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:18:22.632744 containerd[1543]: time="2025-11-08T00:18:22.632651361Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:22.635985 containerd[1543]: time="2025-11-08T00:18:22.635955422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:18:22.636048 containerd[1543]: time="2025-11-08T00:18:22.636015052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:18:22.636196 kubelet[2729]: E1108 00:18:22.636161 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:22.636269 kubelet[2729]: E1108 00:18:22.636200 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:22.637309 kubelet[2729]: E1108 00:18:22.636396 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:22.637361 containerd[1543]: time="2025-11-08T00:18:22.636432655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:18:23.007618 containerd[1543]: time="2025-11-08T00:18:23.007525115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:23.009289 containerd[1543]: time="2025-11-08T00:18:23.007873853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:18:23.009289 containerd[1543]: time="2025-11-08T00:18:23.007926439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:23.009501 kubelet[2729]: E1108 00:18:23.008055 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:23.009501 kubelet[2729]: E1108 00:18:23.008089 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:23.009501 kubelet[2729]: E1108 00:18:23.008238 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dfl5f_calico-system(356509c0-5649-4d69-a66f-1b0a4ca00464): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:23.009501 kubelet[2729]: E1108 00:18:23.008265 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:18:23.010894 containerd[1543]: time="2025-11-08T00:18:23.010039333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:18:23.269294 kubelet[2729]: E1108 00:18:23.268634 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:18:23.373496 containerd[1543]: time="2025-11-08T00:18:23.373441820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:23.373932 containerd[1543]: time="2025-11-08T00:18:23.373891465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:18:23.374051 containerd[1543]: 
time="2025-11-08T00:18:23.373965629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:18:23.374122 kubelet[2729]: E1108 00:18:23.374086 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:23.374199 kubelet[2729]: E1108 00:18:23.374130 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:23.374234 kubelet[2729]: E1108 00:18:23.374207 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:23.374343 kubelet[2729]: E1108 00:18:23.374251 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:18:30.269483 kubelet[2729]: E1108 00:18:30.268255 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:18:34.268231 kubelet[2729]: E1108 00:18:34.267721 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:18:34.362284 kubelet[2729]: E1108 00:18:34.362239 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:18:35.267340 kubelet[2729]: E1108 00:18:35.267283 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:18:36.270336 kubelet[2729]: E1108 00:18:36.268024 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:18:36.271034 containerd[1543]: time="2025-11-08T00:18:36.270463463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:18:36.626051 containerd[1543]: time="2025-11-08T00:18:36.625820704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:36.628096 containerd[1543]: time="2025-11-08T00:18:36.628066328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:18:36.628242 containerd[1543]: time="2025-11-08T00:18:36.628132740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:18:36.629346 kubelet[2729]: E1108 00:18:36.628391 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:36.629346 kubelet[2729]: E1108 00:18:36.628437 2729 kuberuntime_image.go:43] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:36.629346 kubelet[2729]: E1108 00:18:36.628508 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:36.639384 containerd[1543]: time="2025-11-08T00:18:36.630049907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:18:36.968314 containerd[1543]: time="2025-11-08T00:18:36.967378698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:36.970756 containerd[1543]: time="2025-11-08T00:18:36.970730136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:18:36.970913 containerd[1543]: time="2025-11-08T00:18:36.970747650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:36.971041 kubelet[2729]: E1108 00:18:36.971014 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:36.971077 kubelet[2729]: E1108 00:18:36.971056 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:36.971255 kubelet[2729]: E1108 00:18:36.971242 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:36.971347 kubelet[2729]: E1108 00:18:36.971321 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:18:37.267128 kubelet[2729]: E1108 00:18:37.267040 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:18:38.733522 systemd[1]: Started sshd@7-139.178.70.104:22-147.75.109.163:37442.service - OpenSSH per-connection server daemon (147.75.109.163:37442). Nov 8 00:18:38.946908 sshd[5727]: Accepted publickey for core from 147.75.109.163 port 37442 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:18:38.952852 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:38.959632 systemd-logind[1520]: New session 10 of user core. Nov 8 00:18:38.963426 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:18:39.803603 sshd[5727]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:39.811549 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:18:39.815709 systemd[1]: sshd@7-139.178.70.104:22-147.75.109.163:37442.service: Deactivated successfully. Nov 8 00:18:39.817677 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:18:39.820070 systemd-logind[1520]: Removed session 10. 
Nov 8 00:18:42.277606 containerd[1543]: time="2025-11-08T00:18:42.277202582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:42.618635 containerd[1543]: time="2025-11-08T00:18:42.618547559Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:42.628561 containerd[1543]: time="2025-11-08T00:18:42.628532657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:42.628647 containerd[1543]: time="2025-11-08T00:18:42.628591441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:42.654647 kubelet[2729]: E1108 00:18:42.654609 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:42.655496 kubelet[2729]: E1108 00:18:42.655472 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:42.656254 kubelet[2729]: E1108 00:18:42.655540 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d46c4cb7-dlrvn_calico-apiserver(e5b24471-658f-4121-a78d-73d2e59f83f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:42.656254 kubelet[2729]: E1108 00:18:42.655569 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:18:44.814781 systemd[1]: Started sshd@8-139.178.70.104:22-147.75.109.163:44246.service - OpenSSH per-connection server daemon (147.75.109.163:44246). Nov 8 00:18:44.868772 sshd[5752]: Accepted publickey for core from 147.75.109.163 port 44246 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:18:44.869257 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:44.872420 systemd-logind[1520]: New session 11 of user core. Nov 8 00:18:44.876450 systemd[1]: Started session-11.scope - Session 11 of User core. 
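Between the outright ErrImagePull entries, the kubelet emits "Back-off pulling image" while it waits out its image pull backoff; the later retries in this log arrive minutes apart. A rough sketch of that schedule, assuming the commonly cited defaults (10 s initial delay, doubling per failure, capped at 300 s) rather than values read from this node:

# Assumed-default image pull backoff; parameters are illustrative.
def backoff_delays(initial: float = 10.0, factor: float = 2.0,
                   cap: float = 300.0, attempts: int = 8):
    delay = initial
    for n in range(1, attempts + 1):
        yield n, delay
        delay = min(delay * factor, cap)

for n, delay in backoff_delays():
    print(f"after failure {n}: wait {delay:.0f} s before the next pull")
# Once the cap is reached, retries settle at roughly five-minute spacing,
# consistent with the widening gaps between PullImage attempts above.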
Nov 8 00:18:45.105593 sshd[5752]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:45.108694 systemd[1]: sshd@8-139.178.70.104:22-147.75.109.163:44246.service: Deactivated successfully. Nov 8 00:18:45.110405 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:18:45.113489 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:18:45.115472 systemd-logind[1520]: Removed session 11. Nov 8 00:18:47.321465 containerd[1543]: time="2025-11-08T00:18:47.320916632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:47.664759 containerd[1543]: time="2025-11-08T00:18:47.664514702Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:47.665204 containerd[1543]: time="2025-11-08T00:18:47.665179250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:47.665441 containerd[1543]: time="2025-11-08T00:18:47.665260509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:47.665477 kubelet[2729]: E1108 00:18:47.665384 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:47.665477 kubelet[2729]: E1108 00:18:47.665423 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:47.665695 kubelet[2729]: E1108 00:18:47.665550 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c7df7fbc6-cwgpp_calico-apiserver(793d1558-9161-49f1-a744-65e7ee945bd5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:47.665695 kubelet[2729]: E1108 00:18:47.665571 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:18:47.666244 containerd[1543]: time="2025-11-08T00:18:47.665870755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:18:47.999930 containerd[1543]: time="2025-11-08T00:18:47.999819510Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 
00:18:48.005199 containerd[1543]: time="2025-11-08T00:18:48.005168195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:18:48.005255 containerd[1543]: time="2025-11-08T00:18:48.005234758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:48.005425 kubelet[2729]: E1108 00:18:48.005392 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:48.005474 kubelet[2729]: E1108 00:18:48.005431 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:48.005502 kubelet[2729]: E1108 00:18:48.005483 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dfl5f_calico-system(356509c0-5649-4d69-a66f-1b0a4ca00464): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:48.005535 kubelet[2729]: E1108 00:18:48.005506 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:18:48.267888 containerd[1543]: time="2025-11-08T00:18:48.267818580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:48.757428 containerd[1543]: time="2025-11-08T00:18:48.757393995Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:48.757871 containerd[1543]: time="2025-11-08T00:18:48.757845841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:48.757939 containerd[1543]: time="2025-11-08T00:18:48.757913048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:48.758151 kubelet[2729]: E1108 00:18:48.758036 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:48.758151 kubelet[2729]: E1108 00:18:48.758065 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:48.758151 kubelet[2729]: E1108 00:18:48.758112 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d46c4cb7-lvp22_calico-apiserver(fcbbea23-b3d5-4961-9b75-cd8ace86f6c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:48.758151 kubelet[2729]: E1108 00:18:48.758141 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:18:50.121563 systemd[1]: Started sshd@9-139.178.70.104:22-147.75.109.163:37872.service - OpenSSH per-connection server daemon (147.75.109.163:37872). Nov 8 00:18:50.214063 sshd[5773]: Accepted publickey for core from 147.75.109.163 port 37872 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:18:50.214476 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:50.218722 systemd-logind[1520]: New session 12 of user core. Nov 8 00:18:50.224464 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:18:50.271181 containerd[1543]: time="2025-11-08T00:18:50.271156912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:18:50.411272 sshd[5773]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:50.411613 systemd[1]: Started sshd@10-139.178.70.104:22-147.75.109.163:37882.service - OpenSSH per-connection server daemon (147.75.109.163:37882). Nov 8 00:18:50.422080 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:18:50.424744 systemd[1]: sshd@9-139.178.70.104:22-147.75.109.163:37872.service: Deactivated successfully. Nov 8 00:18:50.426207 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:18:50.427014 systemd-logind[1520]: Removed session 12. Nov 8 00:18:50.497986 sshd[5784]: Accepted publickey for core from 147.75.109.163 port 37882 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:18:50.498923 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:50.502178 systemd-logind[1520]: New session 13 of user core. Nov 8 00:18:50.512428 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 8 00:18:50.610791 containerd[1543]: time="2025-11-08T00:18:50.610586684Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:50.622255 containerd[1543]: time="2025-11-08T00:18:50.622138677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:18:50.622255 containerd[1543]: time="2025-11-08T00:18:50.622209037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:50.638220 kubelet[2729]: E1108 00:18:50.622349 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:50.638220 kubelet[2729]: E1108 00:18:50.622381 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:50.638220 kubelet[2729]: E1108 00:18:50.622428 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6499957458-l5qw9_calico-system(d70d6c69-7a19-4335-9162-f8ca1575049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:50.638220 kubelet[2729]: E1108 00:18:50.622450 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:18:50.794765 sshd[5784]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:50.802432 systemd[1]: sshd@10-139.178.70.104:22-147.75.109.163:37882.service: Deactivated successfully. Nov 8 00:18:50.804051 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:18:50.805662 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:18:50.811588 systemd[1]: Started sshd@11-139.178.70.104:22-147.75.109.163:37884.service - OpenSSH per-connection server daemon (147.75.109.163:37884). Nov 8 00:18:50.812373 systemd-logind[1520]: Removed session 13. 
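With six different images failing on the same node, a quick tally of which references are involved is useful. A parsing sketch over journal lines like the ones above; the failed_pulls helper is hypothetical, and the regex simply mirrors containerd's PullImage error wording, where the reference sits inside escaped quotes:

import re
from collections import Counter

# Matches: PullImage \"<reference>\" failed
PULL_FAILED = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

def failed_pulls(journal_lines):
    return Counter(m.group(1)
                   for line in journal_lines
                   for m in PULL_FAILED.finditer(line))

sample = ('containerd[1543]: time="2025-11-08T00:18:50.622138677Z" '
          'level=error msg="PullImage \\"ghcr.io/flatcar/calico/'
          'kube-controllers:v3.30.4\\" failed"')
print(failed_pulls([sample]))
# Counter({'ghcr.io/flatcar/calico/kube-controllers:v3.30.4': 1})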
Nov 8 00:18:50.927377 sshd[5796]: Accepted publickey for core from 147.75.109.163 port 37884 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:18:50.928317 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:50.931712 systemd-logind[1520]: New session 14 of user core. Nov 8 00:18:50.938558 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:18:51.140575 sshd[5796]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:51.142602 systemd[1]: sshd@11-139.178.70.104:22-147.75.109.163:37884.service: Deactivated successfully. Nov 8 00:18:51.143953 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:18:51.144863 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:18:51.145804 systemd-logind[1520]: Removed session 14. Nov 8 00:18:51.267647 containerd[1543]: time="2025-11-08T00:18:51.267589937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:18:51.268280 kubelet[2729]: E1108 00:18:51.268235 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:18:51.644535 containerd[1543]: time="2025-11-08T00:18:51.644323075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:51.645019 containerd[1543]: time="2025-11-08T00:18:51.644917593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:18:51.645019 containerd[1543]: time="2025-11-08T00:18:51.644978648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:18:51.645098 kubelet[2729]: E1108 00:18:51.645059 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:18:51.645098 kubelet[2729]: E1108 00:18:51.645087 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 
00:18:51.645281 kubelet[2729]: E1108 00:18:51.645131 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:51.646650 containerd[1543]: time="2025-11-08T00:18:51.646633406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:18:52.194617 containerd[1543]: time="2025-11-08T00:18:52.194518356Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:52.198605 containerd[1543]: time="2025-11-08T00:18:52.198423457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:18:52.198605 containerd[1543]: time="2025-11-08T00:18:52.198473855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:18:52.198908 kubelet[2729]: E1108 00:18:52.198743 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:52.198908 kubelet[2729]: E1108 00:18:52.198857 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:18:52.199070 kubelet[2729]: E1108 00:18:52.199006 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-tzsgq_calico-system(d3463313-ed82-40c5-914e-c6418d31744b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:52.199070 kubelet[2729]: E1108 00:18:52.199039 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:18:55.267317 kubelet[2729]: E1108 00:18:55.266996 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:18:56.161390 systemd[1]: Started sshd@12-139.178.70.104:22-147.75.109.163:37886.service - OpenSSH per-connection server daemon (147.75.109.163:37886). Nov 8 00:18:56.189350 sshd[5815]: Accepted publickey for core from 147.75.109.163 port 37886 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:18:56.191787 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:56.196000 systemd-logind[1520]: New session 15 of user core. Nov 8 00:18:56.200540 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:18:56.316375 sshd[5815]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:56.319604 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:18:56.319854 systemd[1]: sshd@12-139.178.70.104:22-147.75.109.163:37886.service: Deactivated successfully. Nov 8 00:18:56.322133 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:18:56.323730 systemd-logind[1520]: Removed session 15. Nov 8 00:19:01.267255 kubelet[2729]: E1108 00:19:01.267185 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e" Nov 8 00:19:01.267255 kubelet[2729]: E1108 00:19:01.267215 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464" Nov 8 00:19:01.331491 systemd[1]: Started sshd@13-139.178.70.104:22-147.75.109.163:55576.service - OpenSSH per-connection server daemon (147.75.109.163:55576). 
Nov 8 00:19:01.536939 sshd[5849]: Accepted publickey for core from 147.75.109.163 port 55576 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:19:01.538048 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:01.547081 systemd-logind[1520]: New session 16 of user core. Nov 8 00:19:01.552827 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:19:01.861263 sshd[5849]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:01.863930 systemd[1]: sshd@13-139.178.70.104:22-147.75.109.163:55576.service: Deactivated successfully. Nov 8 00:19:01.865122 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:19:01.865596 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:19:01.866143 systemd-logind[1520]: Removed session 16. Nov 8 00:19:02.268855 kubelet[2729]: E1108 00:19:02.268740 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5" Nov 8 00:19:03.269932 kubelet[2729]: E1108 00:19:03.269294 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b" Nov 8 00:19:03.269932 kubelet[2729]: E1108 00:19:03.269612 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047" Nov 8 00:19:04.291317 kubelet[2729]: E1108 00:19:04.290491 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6" Nov 8 00:19:06.871560 systemd[1]: Started sshd@14-139.178.70.104:22-147.75.109.163:55590.service - OpenSSH per-connection server daemon (147.75.109.163:55590). Nov 8 00:19:06.924401 sshd[5863]: Accepted publickey for core from 147.75.109.163 port 55590 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:19:06.925511 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:06.928452 systemd-logind[1520]: New session 17 of user core. Nov 8 00:19:06.932621 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:19:07.050460 sshd[5863]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:07.053839 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:19:07.054428 systemd[1]: sshd@14-139.178.70.104:22-147.75.109.163:55590.service: Deactivated successfully. Nov 8 00:19:07.056084 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:19:07.057019 systemd-logind[1520]: Removed session 17. Nov 8 00:19:07.266706 kubelet[2729]: E1108 00:19:07.266672 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1" Nov 8 00:19:12.059139 systemd[1]: Started sshd@15-139.178.70.104:22-147.75.109.163:57038.service - OpenSSH per-connection server daemon (147.75.109.163:57038). Nov 8 00:19:12.182198 sshd[5876]: Accepted publickey for core from 147.75.109.163 port 57038 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:19:12.183624 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:12.187031 systemd-logind[1520]: New session 18 of user core. Nov 8 00:19:12.193471 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 8 00:19:12.269173 kubelet[2729]: E1108 00:19:12.267942 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464"
Nov 8 00:19:12.388226 sshd[5876]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:12.393960 systemd[1]: sshd@15-139.178.70.104:22-147.75.109.163:57038.service: Deactivated successfully.
Nov 8 00:19:12.397383 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:19:12.398637 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:19:12.407538 systemd[1]: Started sshd@16-139.178.70.104:22-147.75.109.163:57042.service - OpenSSH per-connection server daemon (147.75.109.163:57042).
Nov 8 00:19:12.411914 systemd-logind[1520]: Removed session 18.
Nov 8 00:19:12.443829 sshd[5889]: Accepted publickey for core from 147.75.109.163 port 57042 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g
Nov 8 00:19:12.444488 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:19:12.447985 systemd-logind[1520]: New session 19 of user core.
Nov 8 00:19:12.451409 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:19:13.093858 sshd[5889]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:13.098992 systemd[1]: sshd@16-139.178.70.104:22-147.75.109.163:57042.service: Deactivated successfully.
Nov 8 00:19:13.100825 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:19:13.102633 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:19:13.105881 systemd[1]: Started sshd@17-139.178.70.104:22-147.75.109.163:57044.service - OpenSSH per-connection server daemon (147.75.109.163:57044).
Nov 8 00:19:13.107727 systemd-logind[1520]: Removed session 19.
Nov 8 00:19:13.365830 sshd[5900]: Accepted publickey for core from 147.75.109.163 port 57044 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g
Nov 8 00:19:13.366747 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:19:13.369525 systemd-logind[1520]: New session 20 of user core.
Nov 8 00:19:13.375404 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:19:14.111758 sshd[5900]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:14.117414 systemd[1]: Started sshd@18-139.178.70.104:22-147.75.109.163:57050.service - OpenSSH per-connection server daemon (147.75.109.163:57050).
Nov 8 00:19:14.119650 systemd[1]: sshd@17-139.178.70.104:22-147.75.109.163:57044.service: Deactivated successfully.
Nov 8 00:19:14.122197 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:19:14.123730 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:19:14.125034 systemd-logind[1520]: Removed session 20.
Nov 8 00:19:14.165445 sshd[5916]: Accepted publickey for core from 147.75.109.163 port 57050 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g
Nov 8 00:19:14.166755 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:19:14.171439 systemd-logind[1520]: New session 21 of user core.
Nov 8 00:19:14.176446 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:19:14.270145 kubelet[2729]: E1108 00:19:14.269926 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e"
Nov 8 00:19:14.271461 kubelet[2729]: E1108 00:19:14.270893 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047"
Nov 8 00:19:14.271461 kubelet[2729]: E1108 00:19:14.270974 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b"
Nov 8 00:19:14.438659 sshd[5916]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:14.445561 systemd[1]: sshd@18-139.178.70.104:22-147.75.109.163:57050.service: Deactivated successfully.
Nov 8 00:19:14.448036 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:19:14.450173 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:19:14.457657 systemd[1]: Started sshd@19-139.178.70.104:22-147.75.109.163:57066.service - OpenSSH per-connection server daemon (147.75.109.163:57066).
Nov 8 00:19:14.462268 systemd-logind[1520]: Removed session 21.
Nov 8 00:19:14.508697 sshd[5931]: Accepted publickey for core from 147.75.109.163 port 57066 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g
Nov 8 00:19:14.510031 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:19:14.515067 systemd-logind[1520]: New session 22 of user core.
Nov 8 00:19:14.521456 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:19:14.625461 sshd[5931]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:14.627572 systemd[1]: sshd@19-139.178.70.104:22-147.75.109.163:57066.service: Deactivated successfully.
Nov 8 00:19:14.628705 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:19:14.629220 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:19:14.629890 systemd-logind[1520]: Removed session 22.
Nov 8 00:19:15.267048 kubelet[2729]: E1108 00:19:15.266904 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5"
Nov 8 00:19:17.273141 kubelet[2729]: E1108 00:19:17.270973 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-lvp22" podUID="fcbbea23-b3d5-4961-9b75-cd8ace86f6c6"
Nov 8 00:19:19.634995 systemd[1]: Started sshd@20-139.178.70.104:22-147.75.109.163:57070.service - OpenSSH per-connection server daemon (147.75.109.163:57070).
Nov 8 00:19:19.719855 sshd[5952]: Accepted publickey for core from 147.75.109.163 port 57070 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g
Nov 8 00:19:19.720983 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:19:19.723665 systemd-logind[1520]: New session 23 of user core.
Nov 8 00:19:19.731396 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:19:19.881849 sshd[5952]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:19.886371 systemd[1]: sshd@20-139.178.70.104:22-147.75.109.163:57070.service: Deactivated successfully.
Nov 8 00:19:19.889049 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:19:19.890671 systemd-logind[1520]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:19:19.891691 systemd-logind[1520]: Removed session 23.
Nov 8 00:19:20.267547 kubelet[2729]: E1108 00:19:20.267499 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d46c4cb7-dlrvn" podUID="e5b24471-658f-4121-a78d-73d2e59f83f1"
Nov 8 00:19:23.267538 kubelet[2729]: E1108 00:19:23.267510 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dfl5f" podUID="356509c0-5649-4d69-a66f-1b0a4ca00464"
Nov 8 00:19:24.896486 systemd[1]: Started sshd@21-139.178.70.104:22-147.75.109.163:46638.service - OpenSSH per-connection server daemon (147.75.109.163:46638).
Nov 8 00:19:25.444186 sshd[5965]: Accepted publickey for core from 147.75.109.163 port 46638 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g
Nov 8 00:19:25.459566 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:19:25.472909 systemd-logind[1520]: New session 24 of user core.
Nov 8 00:19:25.476431 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:19:26.025938 sshd[5965]: pam_unix(sshd:session): session closed for user core
Nov 8 00:19:26.029500 systemd[1]: sshd@21-139.178.70.104:22-147.75.109.163:46638.service: Deactivated successfully.
Nov 8 00:19:26.030665 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:19:26.034130 systemd-logind[1520]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:19:26.036054 systemd-logind[1520]: Removed session 24.
Nov 8 00:19:26.280441 kubelet[2729]: E1108 00:19:26.279693 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6499957458-l5qw9" podUID="d70d6c69-7a19-4335-9162-f8ca1575049e"
Nov 8 00:19:27.267796 containerd[1543]: time="2025-11-08T00:19:27.267689533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:19:27.268131 kubelet[2729]: E1108 00:19:27.267597 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c7df7fbc6-cwgpp" podUID="793d1558-9161-49f1-a744-65e7ee945bd5"
Nov 8 00:19:27.626369 containerd[1543]: time="2025-11-08T00:19:27.625291775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:19:27.626906 containerd[1543]: time="2025-11-08T00:19:27.626869588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:19:27.626960 containerd[1543]: time="2025-11-08T00:19:27.626938525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:19:27.627194 kubelet[2729]: E1108 00:19:27.627100 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:19:27.627194 kubelet[2729]: E1108 00:19:27.627159 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:19:27.627736 kubelet[2729]: E1108 00:19:27.627485 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:19:27.629241 containerd[1543]: time="2025-11-08T00:19:27.628245419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:19:27.987173 containerd[1543]: time="2025-11-08T00:19:27.987113969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:19:27.987633 containerd[1543]: time="2025-11-08T00:19:27.987595905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:19:27.987733 containerd[1543]: time="2025-11-08T00:19:27.987662404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:19:27.987981 kubelet[2729]: E1108 00:19:27.987832 2729 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:19:27.987981 kubelet[2729]: E1108 00:19:27.987873 2729 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:19:27.987981 kubelet[2729]: E1108 00:19:27.987927 2729 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-54745c6b5b-z6gpb_calico-system(05f2c2bc-5f37-4718-b09a-298a728d0047): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:19:27.988074 kubelet[2729]: E1108 00:19:27.987955 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54745c6b5b-z6gpb" podUID="05f2c2bc-5f37-4718-b09a-298a728d0047"
Nov 8 00:19:28.269146 kubelet[2729]: E1108 00:19:28.268718 2729 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzsgq" podUID="d3463313-ed82-40c5-914e-c6418d31744b"