Nov 8 00:28:23.735822 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:28:23.735838 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:28:23.735844 kernel: Disabled fast string operations Nov 8 00:28:23.735848 kernel: BIOS-provided physical RAM map: Nov 8 00:28:23.735852 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Nov 8 00:28:23.735856 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Nov 8 00:28:23.735862 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Nov 8 00:28:23.735866 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Nov 8 00:28:23.735870 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Nov 8 00:28:23.735874 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Nov 8 00:28:23.735878 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Nov 8 00:28:23.735882 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Nov 8 00:28:23.735886 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Nov 8 00:28:23.735890 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 8 00:28:23.735896 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Nov 8 00:28:23.735901 kernel: NX (Execute Disable) protection: active Nov 8 00:28:23.735905 kernel: APIC: Static calls initialized Nov 8 00:28:23.735910 kernel: SMBIOS 2.7 present. Nov 8 00:28:23.735914 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Nov 8 00:28:23.735919 kernel: vmware: hypercall mode: 0x00 Nov 8 00:28:23.735923 kernel: Hypervisor detected: VMware Nov 8 00:28:23.735928 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Nov 8 00:28:23.735933 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Nov 8 00:28:23.735938 kernel: vmware: using clock offset of 2547557263 ns Nov 8 00:28:23.735942 kernel: tsc: Detected 3408.000 MHz processor Nov 8 00:28:23.735947 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:28:23.735953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:28:23.735957 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Nov 8 00:28:23.735962 kernel: total RAM covered: 3072M Nov 8 00:28:23.735966 kernel: Found optimal setting for mtrr clean up Nov 8 00:28:23.735972 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Nov 8 00:28:23.735977 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Nov 8 00:28:23.735982 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:28:23.735987 kernel: Using GB pages for direct mapping Nov 8 00:28:23.735991 kernel: ACPI: Early table checksum verification disabled Nov 8 00:28:23.735996 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Nov 8 00:28:23.736000 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Nov 8 00:28:23.736005 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Nov 8 00:28:23.736010 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Nov 8 00:28:23.736015 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Nov 8 00:28:23.736022 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Nov 8 00:28:23.736026 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Nov 8 00:28:23.736031 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Nov 8 00:28:23.736036 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Nov 8 00:28:23.736041 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Nov 8 00:28:23.736047 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Nov 8 00:28:23.736052 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Nov 8 00:28:23.736057 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Nov 8 00:28:23.736062 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Nov 8 00:28:23.736067 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 8 00:28:23.736072 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Nov 8 00:28:23.736077 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Nov 8 00:28:23.736081 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Nov 8 00:28:23.736086 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Nov 8 00:28:23.736091 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Nov 8 00:28:23.736097 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Nov 8 00:28:23.736102 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Nov 8 00:28:23.736106 kernel: system APIC only can use physical flat Nov 8 00:28:23.736111 kernel: APIC: Switched APIC routing to: physical flat Nov 8 00:28:23.736116 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:28:23.736121 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Nov 8 00:28:23.736126 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Nov 8 00:28:23.736131 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Nov 8 00:28:23.736136 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Nov 8 00:28:23.736141 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Nov 8 00:28:23.736146 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Nov 8 00:28:23.736151 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Nov 8 00:28:23.736156 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Nov 8 00:28:23.736160 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Nov 8 00:28:23.736165 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Nov 8 00:28:23.736170 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Nov 8 00:28:23.736184 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Nov 8 00:28:23.736190 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Nov 8 00:28:23.736194 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Nov 8 00:28:23.736201 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Nov 8 00:28:23.736206 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Nov 8 00:28:23.736211 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Nov 8 00:28:23.736216 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Nov 8 00:28:23.736221 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Nov 8 00:28:23.736225 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Nov 8 00:28:23.736230 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Nov 8 00:28:23.736235 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Nov 8 00:28:23.736240 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Nov 8 00:28:23.736244 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Nov 8 00:28:23.736249 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Nov 8 00:28:23.736255 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Nov 8 00:28:23.736259 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Nov 8 00:28:23.736264 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Nov 8 00:28:23.736269 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Nov 8 00:28:23.736274 kernel: SRAT: PXM 
0 -> APIC 0x3c -> Node 0 Nov 8 00:28:23.736279 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Nov 8 00:28:23.736283 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Nov 8 00:28:23.736288 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Nov 8 00:28:23.736293 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Nov 8 00:28:23.736298 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Nov 8 00:28:23.736304 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Nov 8 00:28:23.736309 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Nov 8 00:28:23.736313 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Nov 8 00:28:23.736318 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Nov 8 00:28:23.736323 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Nov 8 00:28:23.736328 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Nov 8 00:28:23.736332 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Nov 8 00:28:23.736337 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Nov 8 00:28:23.736342 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Nov 8 00:28:23.736346 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Nov 8 00:28:23.736352 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Nov 8 00:28:23.736357 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Nov 8 00:28:23.736362 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Nov 8 00:28:23.736366 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Nov 8 00:28:23.736371 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Nov 8 00:28:23.736376 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Nov 8 00:28:23.736381 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Nov 8 00:28:23.736385 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Nov 8 00:28:23.736390 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Nov 8 00:28:23.736395 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Nov 8 00:28:23.736400 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Nov 8 00:28:23.736405 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Nov 8 00:28:23.736410 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Nov 8 00:28:23.736419 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Nov 8 00:28:23.736424 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Nov 8 00:28:23.736429 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Nov 8 00:28:23.736434 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Nov 8 00:28:23.736439 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Nov 8 00:28:23.736444 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Nov 8 00:28:23.736450 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Nov 8 00:28:23.736456 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Nov 8 00:28:23.736461 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Nov 8 00:28:23.736466 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Nov 8 00:28:23.736471 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Nov 8 00:28:23.736476 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Nov 8 00:28:23.736481 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Nov 8 00:28:23.736486 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Nov 8 00:28:23.736491 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Nov 8 00:28:23.736496 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Nov 8 00:28:23.736502 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Nov 8 00:28:23.736507 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Nov 8 00:28:23.736512 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Nov 8 00:28:23.736517 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Nov 8 00:28:23.736522 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Nov 8 00:28:23.736528 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Nov 8 00:28:23.736533 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Nov 8 00:28:23.736538 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Nov 8 00:28:23.736543 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Nov 8 00:28:23.736548 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Nov 8 
00:28:23.736554 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Nov 8 00:28:23.736559 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Nov 8 00:28:23.736564 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Nov 8 00:28:23.736569 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Nov 8 00:28:23.736574 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Nov 8 00:28:23.736579 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Nov 8 00:28:23.736584 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Nov 8 00:28:23.736589 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Nov 8 00:28:23.736594 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Nov 8 00:28:23.736599 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Nov 8 00:28:23.736605 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Nov 8 00:28:23.736610 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Nov 8 00:28:23.736615 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Nov 8 00:28:23.736620 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Nov 8 00:28:23.736625 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Nov 8 00:28:23.736631 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Nov 8 00:28:23.736636 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Nov 8 00:28:23.736641 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Nov 8 00:28:23.736646 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Nov 8 00:28:23.736651 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Nov 8 00:28:23.736657 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Nov 8 00:28:23.736662 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Nov 8 00:28:23.736667 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Nov 8 00:28:23.736672 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Nov 8 00:28:23.736677 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Nov 8 00:28:23.736682 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Nov 8 00:28:23.736687 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Nov 8 00:28:23.736692 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Nov 8 00:28:23.736697 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Nov 8 00:28:23.736702 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Nov 8 00:28:23.736707 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Nov 8 00:28:23.736714 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Nov 8 00:28:23.736719 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Nov 8 00:28:23.736723 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Nov 8 00:28:23.736729 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Nov 8 00:28:23.736734 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Nov 8 00:28:23.736739 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Nov 8 00:28:23.736744 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Nov 8 00:28:23.736749 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Nov 8 00:28:23.736754 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Nov 8 00:28:23.736759 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Nov 8 00:28:23.736765 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Nov 8 00:28:23.736770 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Nov 8 00:28:23.736775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 8 00:28:23.736780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 8 00:28:23.736785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Nov 8 00:28:23.736791 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Nov 8 00:28:23.736796 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Nov 8 00:28:23.736802 kernel: Zone ranges: Nov 8 00:28:23.736807 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:28:23.736813 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Nov 8 00:28:23.736818 kernel: Normal empty Nov 8 00:28:23.736823 kernel: Movable zone start 
for each node Nov 8 00:28:23.736829 kernel: Early memory node ranges Nov 8 00:28:23.736834 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Nov 8 00:28:23.736839 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Nov 8 00:28:23.736845 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Nov 8 00:28:23.736850 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Nov 8 00:28:23.736873 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:28:23.736879 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Nov 8 00:28:23.736885 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Nov 8 00:28:23.736890 kernel: ACPI: PM-Timer IO Port: 0x1008 Nov 8 00:28:23.736896 kernel: system APIC only can use physical flat Nov 8 00:28:23.736901 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Nov 8 00:28:23.736906 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 8 00:28:23.736911 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 8 00:28:23.736916 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 8 00:28:23.736922 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 8 00:28:23.736927 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 8 00:28:23.736933 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 8 00:28:23.736938 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 8 00:28:23.736943 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 8 00:28:23.736949 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 8 00:28:23.736954 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 8 00:28:23.736959 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 8 00:28:23.736965 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 8 00:28:23.736970 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 8 00:28:23.736975 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 8 00:28:23.736980 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 8 00:28:23.736986 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 8 00:28:23.736992 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Nov 8 00:28:23.736997 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Nov 8 00:28:23.737002 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Nov 8 00:28:23.737007 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Nov 8 00:28:23.737013 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Nov 8 00:28:23.737018 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Nov 8 00:28:23.737023 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Nov 8 00:28:23.737028 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Nov 8 00:28:23.737034 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Nov 8 00:28:23.737040 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Nov 8 00:28:23.737045 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Nov 8 00:28:23.737050 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Nov 8 00:28:23.737055 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Nov 8 00:28:23.737060 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Nov 8 00:28:23.737066 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Nov 8 00:28:23.737071 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Nov 8 00:28:23.737076 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] 
high edge lint[0x1]) Nov 8 00:28:23.737081 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Nov 8 00:28:23.737088 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Nov 8 00:28:23.737093 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Nov 8 00:28:23.737098 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Nov 8 00:28:23.737103 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Nov 8 00:28:23.737108 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Nov 8 00:28:23.737114 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Nov 8 00:28:23.737119 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Nov 8 00:28:23.737124 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Nov 8 00:28:23.737129 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Nov 8 00:28:23.737134 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Nov 8 00:28:23.737141 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Nov 8 00:28:23.737146 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Nov 8 00:28:23.737151 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Nov 8 00:28:23.737156 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Nov 8 00:28:23.737161 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Nov 8 00:28:23.737166 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Nov 8 00:28:23.737176 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Nov 8 00:28:23.737182 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Nov 8 00:28:23.737187 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Nov 8 00:28:23.737192 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Nov 8 00:28:23.737199 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Nov 8 00:28:23.737204 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Nov 8 00:28:23.737209 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Nov 8 00:28:23.737214 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Nov 8 00:28:23.737220 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Nov 8 00:28:23.737225 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Nov 8 00:28:23.737230 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Nov 8 00:28:23.737235 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Nov 8 00:28:23.737240 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Nov 8 00:28:23.737247 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Nov 8 00:28:23.737252 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Nov 8 00:28:23.737257 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Nov 8 00:28:23.737262 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Nov 8 00:28:23.737268 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Nov 8 00:28:23.737273 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Nov 8 00:28:23.737278 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Nov 8 00:28:23.737283 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Nov 8 00:28:23.737288 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Nov 8 00:28:23.737294 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Nov 8 00:28:23.737300 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Nov 8 00:28:23.737305 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Nov 8 00:28:23.737310 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Nov 8 
00:28:23.737315 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Nov 8 00:28:23.737321 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Nov 8 00:28:23.737326 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Nov 8 00:28:23.737331 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Nov 8 00:28:23.737337 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Nov 8 00:28:23.737342 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Nov 8 00:28:23.737347 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Nov 8 00:28:23.737353 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Nov 8 00:28:23.737359 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Nov 8 00:28:23.737364 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Nov 8 00:28:23.737369 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Nov 8 00:28:23.737374 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Nov 8 00:28:23.737380 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Nov 8 00:28:23.737385 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Nov 8 00:28:23.737390 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Nov 8 00:28:23.737395 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Nov 8 00:28:23.737401 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Nov 8 00:28:23.737406 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Nov 8 00:28:23.737412 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Nov 8 00:28:23.737417 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Nov 8 00:28:23.737422 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Nov 8 00:28:23.737427 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Nov 8 00:28:23.737432 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Nov 8 00:28:23.737438 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Nov 8 00:28:23.737443 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Nov 8 00:28:23.737448 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Nov 8 00:28:23.737454 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Nov 8 00:28:23.737459 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Nov 8 00:28:23.737465 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Nov 8 00:28:23.737470 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Nov 8 00:28:23.737475 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Nov 8 00:28:23.737480 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Nov 8 00:28:23.737486 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Nov 8 00:28:23.737491 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Nov 8 00:28:23.737496 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Nov 8 00:28:23.737501 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Nov 8 00:28:23.737508 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Nov 8 00:28:23.737513 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Nov 8 00:28:23.737518 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Nov 8 00:28:23.737523 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Nov 8 00:28:23.737529 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Nov 8 00:28:23.737534 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Nov 8 00:28:23.737539 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Nov 8 00:28:23.737544 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Nov 8 00:28:23.737549 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Nov 8 00:28:23.737556 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Nov 8 00:28:23.737561 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Nov 8 00:28:23.737566 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Nov 8 00:28:23.737571 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Nov 8 00:28:23.737577 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Nov 8 00:28:23.737582 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Nov 8 00:28:23.737587 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:28:23.737593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Nov 8 00:28:23.737598 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:28:23.737603 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Nov 8 00:28:23.737609 kernel: TSC deadline timer available Nov 8 00:28:23.737615 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Nov 8 00:28:23.737620 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Nov 8 00:28:23.737625 kernel: Booting paravirtualized kernel on VMware hypervisor Nov 8 00:28:23.737631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:28:23.737636 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Nov 8 00:28:23.737642 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 8 00:28:23.737647 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 8 00:28:23.737652 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Nov 8 00:28:23.737659 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Nov 8 00:28:23.737664 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Nov 8 00:28:23.737669 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Nov 8 00:28:23.737674 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Nov 8 00:28:23.737686 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Nov 8 00:28:23.737693 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Nov 8 00:28:23.737698 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Nov 8 00:28:23.737704 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Nov 8 00:28:23.737709 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Nov 8 00:28:23.737716 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Nov 8 00:28:23.737721 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Nov 8 00:28:23.737727 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Nov 8 00:28:23.737732 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Nov 8 00:28:23.737738 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Nov 8 00:28:23.737743 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Nov 8 00:28:23.737750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:28:23.737757 kernel: random: crng init done Nov 8 00:28:23.737762 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Nov 8 00:28:23.737768 kernel: 
printk: log_buf_len total cpu_extra contributions: 520192 bytes Nov 8 00:28:23.737773 kernel: printk: log_buf_len min size: 262144 bytes Nov 8 00:28:23.737779 kernel: printk: log_buf_len: 1048576 bytes Nov 8 00:28:23.737785 kernel: printk: early log buf free: 239760(91%) Nov 8 00:28:23.737791 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:28:23.737796 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:28:23.737802 kernel: Fallback order for Node 0: 0 Nov 8 00:28:23.737808 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Nov 8 00:28:23.737814 kernel: Policy zone: DMA32 Nov 8 00:28:23.737820 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:28:23.737826 kernel: Memory: 1936332K/2096628K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 160036K reserved, 0K cma-reserved) Nov 8 00:28:23.737833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Nov 8 00:28:23.737839 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:28:23.737845 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:28:23.737854 kernel: Dynamic Preempt: voluntary Nov 8 00:28:23.737860 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:28:23.737866 kernel: rcu: RCU event tracing is enabled. Nov 8 00:28:23.737872 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Nov 8 00:28:23.737877 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:28:23.737883 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:28:23.737889 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:28:23.737895 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:28:23.737900 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Nov 8 00:28:23.737907 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Nov 8 00:28:23.737913 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Nov 8 00:28:23.737919 kernel: Console: colour VGA+ 80x25 Nov 8 00:28:23.737925 kernel: printk: console [tty0] enabled Nov 8 00:28:23.737932 kernel: printk: console [ttyS0] enabled Nov 8 00:28:23.737937 kernel: ACPI: Core revision 20230628 Nov 8 00:28:23.737943 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Nov 8 00:28:23.737949 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:28:23.737954 kernel: x2apic enabled Nov 8 00:28:23.737961 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:28:23.737967 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:28:23.737973 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:28:23.737979 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Nov 8 00:28:23.737984 kernel: Disabled fast string operations Nov 8 00:28:23.737990 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:28:23.737996 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:28:23.738001 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:28:23.738007 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:28:23.738014 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:28:23.738020 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 8 00:28:23.738025 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 8 00:28:23.738031 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 8 00:28:23.738037 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:28:23.738042 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:28:23.738048 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:28:23.738054 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 8 00:28:23.738061 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:28:23.738066 kernel: active return thunk: its_return_thunk Nov 8 00:28:23.738072 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:28:23.738078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:28:23.738084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:28:23.738089 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:28:23.738095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:28:23.738101 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:28:23.738106 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:28:23.738113 kernel: pid_max: default: 131072 minimum: 1024 Nov 8 00:28:23.738119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:28:23.738124 kernel: landlock: Up and running. Nov 8 00:28:23.738130 kernel: SELinux: Initializing. Nov 8 00:28:23.738136 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.738142 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.738147 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 8 00:28:23.738153 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:28:23.738159 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:28:23.738166 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:28:23.738179 kernel: Performance Events: Skylake events, core PMU driver. 
Nov 8 00:28:23.738187 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 8 00:28:23.738193 kernel: core: CPUID marked event: 'instructions' unavailable Nov 8 00:28:23.738199 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 8 00:28:23.738204 kernel: core: CPUID marked event: 'cache references' unavailable Nov 8 00:28:23.738210 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 8 00:28:23.738215 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 8 00:28:23.738221 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 8 00:28:23.738229 kernel: ... version: 1 Nov 8 00:28:23.738234 kernel: ... bit width: 48 Nov 8 00:28:23.738240 kernel: ... generic registers: 4 Nov 8 00:28:23.738246 kernel: ... value mask: 0000ffffffffffff Nov 8 00:28:23.738251 kernel: ... max period: 000000007fffffff Nov 8 00:28:23.738257 kernel: ... fixed-purpose events: 0 Nov 8 00:28:23.738263 kernel: ... event mask: 000000000000000f Nov 8 00:28:23.738268 kernel: signal: max sigframe size: 1776 Nov 8 00:28:23.738274 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:28:23.738281 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:28:23.738287 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:28:23.738293 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:28:23.738299 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:28:23.738304 kernel: .... node #0, CPUs: #1 Nov 8 00:28:23.738310 kernel: Disabled fast string operations Nov 8 00:28:23.738316 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 8 00:28:23.738321 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 8 00:28:23.738327 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:28:23.738332 kernel: smpboot: Max logical packages: 128 Nov 8 00:28:23.738339 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 8 00:28:23.738345 kernel: devtmpfs: initialized Nov 8 00:28:23.738351 kernel: x86/mm: Memory block size: 128MB Nov 8 00:28:23.738357 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 8 00:28:23.738363 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:28:23.738368 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 8 00:28:23.738374 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:28:23.738380 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:28:23.738385 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:28:23.738392 kernel: audit: type=2000 audit(1762561701.090:1): state=initialized audit_enabled=0 res=1 Nov 8 00:28:23.738397 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:28:23.738403 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:28:23.738409 kernel: cpuidle: using governor menu Nov 8 00:28:23.738414 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 8 00:28:23.738420 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:28:23.738426 kernel: dca service started, version 1.12.1 Nov 8 00:28:23.738432 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 8 00:28:23.738437 kernel: PCI: Using configuration type 1 for base access Nov 8 00:28:23.738444 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:28:23.738450 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:28:23.738455 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:28:23.738461 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:28:23.738467 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:28:23.738472 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:28:23.738478 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:28:23.738484 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:28:23.738489 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:28:23.738496 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 8 00:28:23.738502 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:28:23.738507 kernel: ACPI: Interpreter enabled Nov 8 00:28:23.738513 kernel: ACPI: PM: (supports S0 S1 S5) Nov 8 00:28:23.738519 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:28:23.738524 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:28:23.738530 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:28:23.738535 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 8 00:28:23.738541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 8 00:28:23.738621 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:28:23.738678 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 8 00:28:23.738729 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 8 00:28:23.738737 kernel: PCI host bridge to bus 0000:00 Nov 8 00:28:23.738789 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:28:23.738836 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 8 00:28:23.738884 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:28:23.738930 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:28:23.738975 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 8 00:28:23.739022 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 8 00:28:23.739082 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 8 00:28:23.739139 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 8 00:28:23.739211 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 8 00:28:23.739266 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 8 00:28:23.739318 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 8 00:28:23.739369 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 8 00:28:23.739420 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 8 00:28:23.739472 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 8 00:28:23.739522 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 8 00:28:23.739582 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 8 00:28:23.739635 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 8 00:28:23.739685 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 8 00:28:23.739743 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 8 00:28:23.739794 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 8 00:28:23.739845 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Nov 8 00:28:23.739903 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 8 00:28:23.739953 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 8 00:28:23.740005 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 8 00:28:23.740055 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 8 00:28:23.740105 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 8 00:28:23.740155 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:28:23.740553 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 8 00:28:23.740618 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.740672 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.740728 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.740794 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.740856 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.740910 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.740968 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.741020 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.741074 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.741126 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.741195 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.741251 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743222 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743316 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743399 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743470 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743529 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743584 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743657 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743712 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743772 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743826 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743882 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743935 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743996 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744049 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744106 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744158 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744249 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744303 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744363 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744417 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744473 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744561 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744655 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Nov 8 00:28:23.744722 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744789 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744854 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744924 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744986 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.745068 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.745153 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747280 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.747431 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747561 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.747656 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747751 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.747868 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747977 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748069 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748162 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748260 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748348 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748438 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748504 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748571 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748630 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748684 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748804 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748898 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748983 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.749076 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.749211 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.749307 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.749398 kernel: pci_bus 0000:01: extended config space not accessible Nov 8 00:28:23.749492 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:28:23.749583 kernel: pci_bus 0000:02: extended config space not accessible Nov 8 00:28:23.749599 kernel: acpiphp: Slot [32] registered Nov 8 00:28:23.749614 kernel: acpiphp: Slot [33] registered Nov 8 00:28:23.749626 kernel: acpiphp: Slot [34] registered Nov 8 00:28:23.749637 kernel: acpiphp: Slot [35] registered Nov 8 00:28:23.749647 kernel: acpiphp: Slot [36] registered Nov 8 00:28:23.749657 kernel: acpiphp: Slot [37] registered Nov 8 00:28:23.749668 kernel: acpiphp: Slot [38] registered Nov 8 00:28:23.749679 kernel: acpiphp: Slot [39] registered Nov 8 00:28:23.749689 kernel: acpiphp: Slot [40] registered Nov 8 00:28:23.749699 kernel: acpiphp: Slot [41] registered Nov 8 00:28:23.749713 kernel: acpiphp: Slot [42] registered Nov 8 00:28:23.749723 kernel: acpiphp: Slot [43] registered Nov 8 00:28:23.749733 kernel: acpiphp: Slot [44] registered Nov 8 00:28:23.749743 kernel: acpiphp: Slot [45] registered Nov 8 00:28:23.749753 kernel: 
acpiphp: Slot [46] registered Nov 8 00:28:23.749764 kernel: acpiphp: Slot [47] registered Nov 8 00:28:23.749774 kernel: acpiphp: Slot [48] registered Nov 8 00:28:23.749783 kernel: acpiphp: Slot [49] registered Nov 8 00:28:23.749793 kernel: acpiphp: Slot [50] registered Nov 8 00:28:23.749803 kernel: acpiphp: Slot [51] registered Nov 8 00:28:23.749817 kernel: acpiphp: Slot [52] registered Nov 8 00:28:23.749827 kernel: acpiphp: Slot [53] registered Nov 8 00:28:23.749837 kernel: acpiphp: Slot [54] registered Nov 8 00:28:23.749847 kernel: acpiphp: Slot [55] registered Nov 8 00:28:23.749857 kernel: acpiphp: Slot [56] registered Nov 8 00:28:23.749867 kernel: acpiphp: Slot [57] registered Nov 8 00:28:23.749877 kernel: acpiphp: Slot [58] registered Nov 8 00:28:23.749887 kernel: acpiphp: Slot [59] registered Nov 8 00:28:23.749896 kernel: acpiphp: Slot [60] registered Nov 8 00:28:23.749913 kernel: acpiphp: Slot [61] registered Nov 8 00:28:23.749935 kernel: acpiphp: Slot [62] registered Nov 8 00:28:23.749953 kernel: acpiphp: Slot [63] registered Nov 8 00:28:23.750063 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 8 00:28:23.750153 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:28:23.750253 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:28:23.750347 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:28:23.752312 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 8 00:28:23.752376 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 8 00:28:23.752430 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 8 00:28:23.752496 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 8 00:28:23.752549 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 8 00:28:23.752609 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 8 00:28:23.752664 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 8 00:28:23.752717 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 8 00:28:23.752772 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:28:23.752825 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.752888 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:28:23.752944 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:28:23.752996 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:28:23.753048 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:28:23.753102 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:28:23.753154 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:28:23.753220 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:28:23.753272 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:28:23.753328 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:28:23.753380 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:28:23.753431 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:28:23.753483 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:28:23.753539 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:28:23.753594 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:28:23.753646 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:28:23.753700 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:28:23.753751 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:28:23.753803 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:28:23.753859 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:28:23.753911 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:28:23.753963 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:28:23.754016 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:28:23.754068 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:28:23.754120 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:28:23.756208 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:28:23.756277 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:28:23.756337 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:28:23.756396 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 8 00:28:23.756450 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 8 00:28:23.756503 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 8 00:28:23.756556 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 8 00:28:23.756609 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 8 00:28:23.756660 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:28:23.756716 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 8 00:28:23.756768 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:28:23.756820 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:28:23.756873 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:28:23.756960 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:28:23.757012 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:28:23.757064 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:28:23.757116 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:28:23.757170 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:28:23.758238 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:28:23.758293 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:28:23.758362 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:28:23.758414 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:28:23.758466 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:28:23.758521 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:28:23.758575 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:28:23.758630 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:28:23.758686 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:28:23.758738 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:28:23.758791 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:28:23.758846 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:28:23.758908 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:28:23.758961 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:28:23.759016 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:28:23.759071 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:28:23.759123 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:28:23.761262 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:28:23.761321 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:28:23.761374 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:28:23.761428 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:28:23.761480 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:28:23.761532 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:28:23.761608 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:28:23.761664 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:28:23.761715 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:28:23.761767 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:28:23.761818 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:28:23.761876 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:28:23.761929 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:28:23.761983 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:28:23.762034 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:28:23.762088 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:28:23.762139 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:28:23.762234 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:28:23.762288 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:28:23.762339 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:28:23.762390 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:28:23.762447 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:28:23.762498 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:28:23.762549 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:28:23.762601 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:28:23.762653 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:28:23.762703 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:28:23.762756 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:28:23.762806 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:28:23.762870 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:28:23.762925 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:28:23.762976 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:28:23.763026 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:28:23.763076 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:28:23.763128 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:28:23.764406 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:28:23.764462 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:28:23.764517 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:28:23.764570 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:28:23.764623 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:28:23.764674 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:28:23.764727 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:28:23.764778 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:28:23.764829 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:28:23.764923 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:28:23.765033 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:28:23.765295 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:28:23.765367 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:28:23.765448 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:28:23.765858 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:28:23.765922 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:28:23.765975 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:28:23.766027 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:28:23.766082 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:28:23.766133 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:28:23.766196 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:28:23.766206 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 8 00:28:23.766211 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 8 00:28:23.766217 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Nov 8 00:28:23.766223 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:28:23.766229 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 8 00:28:23.766237 kernel: iommu: Default domain type: Translated Nov 8 00:28:23.766243 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:28:23.766248 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:28:23.766254 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:28:23.766260 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 8 00:28:23.766266 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 8 00:28:23.766337 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 8 00:28:23.766389 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 8 00:28:23.766440 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:28:23.766450 kernel: vgaarb: loaded Nov 8 00:28:23.766457 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 8 00:28:23.766463 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 8 00:28:23.766469 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:28:23.766474 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:28:23.766480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:28:23.766486 kernel: pnp: PnP ACPI init Nov 8 00:28:23.766544 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 8 00:28:23.766610 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 8 00:28:23.766657 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 8 00:28:23.766708 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 8 00:28:23.766758 kernel: pnp 00:06: [dma 2] Nov 8 00:28:23.766808 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 8 00:28:23.766859 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 8 00:28:23.766908 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 8 00:28:23.766916 kernel: pnp: PnP ACPI: found 8 devices Nov 8 00:28:23.766922 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:28:23.766928 kernel: NET: Registered PF_INET protocol family Nov 8 00:28:23.766934 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:28:23.766939 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:28:23.766945 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:28:23.766951 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:28:23.766957 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:28:23.766964 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:28:23.766970 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.766976 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.766981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:28:23.766987 kernel: NET: Registered PF_XDP protocol family Nov 8 00:28:23.767038 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 8 00:28:23.767091 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 8 00:28:23.767153 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:28:23.767552 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:28:23.767610 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:28:23.767664 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 8 00:28:23.767717 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 8 00:28:23.767769 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 8 00:28:23.767821 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 8 00:28:23.767876 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 8 00:28:23.767928 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 8 00:28:23.767979 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 8 00:28:23.768030 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 8 00:28:23.768081 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 8 00:28:23.768135 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 8 00:28:23.768279 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 8 00:28:23.768352 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 8 00:28:23.768424 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 8 00:28:23.768480 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 8 00:28:23.768532 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 8 00:28:23.768586 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 8 00:28:23.768637 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 8 00:28:23.768687 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 8 00:28:23.768737 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:28:23.768788 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:28:23.768838 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.768888 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.768984 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769035 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769086 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769136 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769197 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769249 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769299 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769349 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769402 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769452 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769502 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769552 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769602 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769652 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769703 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769753 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769806 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769861 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769912 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769981 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770068 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770134 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770258 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770329 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770383 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770433 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770482 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770532 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770582 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770632 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770682 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770732 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770785 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770836 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770887 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770938 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770988 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771039 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771090 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771141 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771219 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771285 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771336 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771385 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771435 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771485 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771535 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771585 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771635 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771685 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771739 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Nov 8 00:28:23.771789 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771838 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771889 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771939 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771989 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772039 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772089 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772138 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772231 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772282 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772332 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772383 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772433 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772483 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772533 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772583 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772633 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772699 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772754 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772804 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775070 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775134 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775208 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775264 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775316 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775367 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775418 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775473 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775537 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775598 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775651 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775703 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:28:23.775757 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 8 00:28:23.775809 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:28:23.775863 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:28:23.775915 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:28:23.775974 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 8 00:28:23.776027 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:28:23.776080 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:28:23.776131 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:28:23.776261 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:28:23.776327 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:28:23.776390 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:28:23.776445 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:28:23.776510 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:28:23.776572 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:28:23.776642 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:28:23.776706 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:28:23.776770 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:28:23.776822 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:28:23.776873 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:28:23.776924 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:28:23.776974 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:28:23.777025 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:28:23.777080 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:28:23.777134 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:28:23.777192 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:28:23.777244 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:28:23.777295 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:28:23.777346 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:28:23.777400 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:28:23.777451 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:28:23.777503 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:28:23.777555 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:28:23.777610 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 8 00:28:23.777663 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:28:23.777715 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:28:23.777767 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:28:23.777819 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:28:23.777874 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:28:23.777925 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:28:23.777976 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:28:23.778028 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:28:23.778080 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:28:23.778132 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:28:23.778216 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:28:23.778271 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:28:23.778322 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:28:23.778376 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:28:23.778426 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:28:23.778477 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:28:23.778528 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:28:23.778579 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:28:23.778629 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:28:23.778680 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:28:23.778730 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:28:23.778781 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:28:23.778833 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:28:23.778904 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:28:23.778955 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:28:23.779005 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:28:23.779055 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:28:23.779106 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:28:23.779156 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:28:23.779235 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:28:23.779288 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:28:23.779341 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:28:23.779395 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:28:23.779447 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:28:23.779499 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:28:23.779551 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:28:23.779603 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:28:23.779655 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:28:23.779706 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:28:23.779758 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:28:23.779811 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:28:23.779863 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:28:23.779931 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:28:23.779983 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:28:23.780035 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:28:23.780087 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:28:23.780139 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:28:23.780200 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:28:23.780253 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:28:23.780304 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:28:23.780355 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:28:23.780410 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:28:23.780462 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:28:23.780513 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:28:23.780565 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:28:23.780616 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:28:23.780667 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:28:23.780719 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:28:23.780771 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:28:23.780823 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:28:23.780875 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:28:23.780930 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:28:23.781018 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:28:23.781070 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:28:23.781121 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:28:23.781211 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:28:23.781269 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:28:23.781321 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:28:23.781372 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:28:23.781424 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:28:23.781478 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:28:23.781530 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:28:23.781582 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:28:23.781633 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:28:23.781685 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:28:23.781736 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:28:23.781787 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:28:23.781839 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:28:23.781890 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:28:23.781941 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:28:23.781993 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:28:23.782041 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:28:23.782087 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:28:23.782133 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:28:23.783653 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:28:23.783715 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 8 00:28:23.783767 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 8 00:28:23.783818 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:28:23.783866 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:28:23.783914 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:28:23.783961 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:28:23.784008 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:28:23.784055 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:28:23.784107 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 8 00:28:23.784157 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 8 00:28:23.784233 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:28:23.784285 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 8 00:28:23.784332 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Nov 8 00:28:23.784378 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:28:23.784431 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 8 00:28:23.784479 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 8 00:28:23.784530 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:28:23.784580 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 8 00:28:23.784627 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:28:23.784679 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 8 00:28:23.784726 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:28:23.784776 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 8 00:28:23.784824 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:28:23.784913 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 8 00:28:23.784961 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:28:23.785013 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 8 00:28:23.785062 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:28:23.785124 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 8 00:28:23.785181 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 8 00:28:23.785231 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:28:23.785283 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 8 00:28:23.785333 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 8 00:28:23.785382 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:28:23.785437 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 8 00:28:23.785488 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 8 00:28:23.785541 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:28:23.785624 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 8 00:28:23.785683 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:28:23.785751 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 8 00:28:23.785802 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:28:23.785862 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 8 00:28:23.785932 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:28:23.785994 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 8 00:28:23.786046 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:28:23.786109 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 8 00:28:23.786169 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:28:23.786247 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 8 00:28:23.786299 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 8 00:28:23.786362 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:28:23.786427 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 8 00:28:23.786480 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 8 00:28:23.786538 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:28:23.786607 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Nov 8 00:28:23.786667 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 8 00:28:23.786720 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:28:23.786793 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 8 00:28:23.786855 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:28:23.786909 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 8 00:28:23.786965 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:28:23.787017 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 8 00:28:23.787066 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:28:23.787123 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 8 00:28:23.787224 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:28:23.787281 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 8 00:28:23.787330 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:28:23.787385 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 8 00:28:23.787437 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 8 00:28:23.787485 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:28:23.787537 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 8 00:28:23.787586 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 8 00:28:23.787634 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:28:23.787686 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 8 00:28:23.787735 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:28:23.787793 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 8 00:28:23.787841 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:28:23.787893 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 8 00:28:23.787942 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:28:23.787994 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 8 00:28:23.788043 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:28:23.788096 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 8 00:28:23.788146 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:28:23.788223 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 8 00:28:23.788275 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:28:23.788333 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:28:23.788343 kernel: PCI: CLS 32 bytes, default 64 Nov 8 00:28:23.788350 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:28:23.788359 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:28:23.788365 kernel: clocksource: Switched to clocksource tsc Nov 8 00:28:23.788372 kernel: Initialise system trusted keyrings Nov 8 00:28:23.788379 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:28:23.788385 kernel: Key type asymmetric registered Nov 8 00:28:23.788391 kernel: Asymmetric key parser 'x509' registered Nov 8 00:28:23.788399 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Nov 8 00:28:23.788405 kernel: io scheduler mq-deadline registered Nov 8 00:28:23.788412 kernel: io scheduler kyber registered Nov 8 00:28:23.788419 kernel: io scheduler bfq registered Nov 8 00:28:23.788476 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 8 00:28:23.788532 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788588 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 8 00:28:23.788642 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788696 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 8 00:28:23.788750 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788805 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 8 00:28:23.788870 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788926 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 8 00:28:23.788980 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789034 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 8 00:28:23.789088 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789145 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 8 00:28:23.789359 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789415 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 8 00:28:23.789469 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789523 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 8 00:28:23.789580 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789634 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 8 00:28:23.789688 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789742 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 8 00:28:23.789796 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789851 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 8 00:28:23.789905 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789962 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 8 00:28:23.790016 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790070 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Nov 8 00:28:23.790124 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790186 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 8 00:28:23.790243 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790297 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 8 00:28:23.790351 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790405 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 8 00:28:23.790699 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790757 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 8 00:28:23.790811 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790869 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 8 00:28:23.790923 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790977 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 8 00:28:23.791031 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791085 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 8 00:28:23.791139 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791574 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 8 00:28:23.791679 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791737 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 8 00:28:23.791801 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791877 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 8 00:28:23.791960 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792023 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 8 00:28:23.792080 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792134 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 8 00:28:23.792194 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792248 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 8 00:28:23.792316 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792391 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 8 00:28:23.792480 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792561 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 8 00:28:23.792625 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792681 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 8 00:28:23.792753 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792810 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 8 00:28:23.792903 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792960 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 8 00:28:23.793016 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.793027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:28:23.793035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:28:23.793041 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:28:23.793048 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 8 00:28:23.793054 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:28:23.793061 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:28:23.793117 kernel: rtc_cmos 00:01: registered as rtc0 Nov 8 00:28:23.793169 kernel: rtc_cmos 00:01: setting system clock to 2025-11-08T00:28:23 UTC (1762561703) Nov 8 00:28:23.793299 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:28:23.793309 kernel: intel_pstate: CPU model not supported Nov 8 00:28:23.793316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:28:23.793322 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:28:23.793328 kernel: Segment Routing with IPv6 Nov 8 00:28:23.793334 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:28:23.793341 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:28:23.793348 kernel: Key type dns_resolver registered Nov 8 00:28:23.793354 kernel: IPI shorthand broadcast: enabled Nov 8 00:28:23.793367 kernel: sched_clock: Marking stable (924133064, 226104169)->(1212169787, -61932554) Nov 8 00:28:23.793379 kernel: registered taskstats version 1 Nov 8 00:28:23.793389 kernel: Loading compiled-in X.509 certificates Nov 8 00:28:23.793398 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:28:23.793407 kernel: Key type .fscrypt registered Nov 8 00:28:23.793417 kernel: Key type fscrypt-provisioning registered Nov 8 00:28:23.793426 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:28:23.793436 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:28:23.793448 kernel: ima: No architecture policies found Nov 8 00:28:23.793458 kernel: clk: Disabling unused clocks Nov 8 00:28:23.793468 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:28:23.793477 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:28:23.793487 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:28:23.793496 kernel: Run /init as init process Nov 8 00:28:23.793504 kernel: with arguments: Nov 8 00:28:23.793515 kernel: /init Nov 8 00:28:23.793525 kernel: with environment: Nov 8 00:28:23.793536 kernel: HOME=/ Nov 8 00:28:23.793545 kernel: TERM=linux Nov 8 00:28:23.793556 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:28:23.793568 systemd[1]: Detected virtualization vmware. Nov 8 00:28:23.793579 systemd[1]: Detected architecture x86-64. Nov 8 00:28:23.793589 systemd[1]: Running in initrd. Nov 8 00:28:23.793598 systemd[1]: No hostname configured, using default hostname. Nov 8 00:28:23.793608 systemd[1]: Hostname set to . Nov 8 00:28:23.793622 systemd[1]: Initializing machine ID from random generator. Nov 8 00:28:23.793633 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:28:23.793644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:23.793652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:23.793659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:28:23.793667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:28:23.793673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:28:23.793680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:28:23.793689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:28:23.793701 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:28:23.793713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:23.793724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:23.793731 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:28:23.793737 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:28:23.793744 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:28:23.793753 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:28:23.793759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:23.793766 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:28:23.793772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:28:23.793779 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 8 00:28:23.793785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:23.793796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:23.793808 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:23.793816 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:28:23.793822 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:28:23.793829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:28:23.793835 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:28:23.793842 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:28:23.793848 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:28:23.793855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:28:23.793861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:23.793885 systemd-journald[217]: Collecting audit messages is disabled. Nov 8 00:28:23.793908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:23.793915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:23.793921 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:28:23.793930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:28:23.793938 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:28:23.793944 kernel: Bridge firewalling registered Nov 8 00:28:23.793951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:23.793961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:23.793974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:28:23.793983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:28:23.793989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:23.793996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:28:23.794005 systemd-journald[217]: Journal started Nov 8 00:28:23.794025 systemd-journald[217]: Runtime Journal (/run/log/journal/a995d73bdc854b3bb4cb8484db23371a) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:28:23.749227 systemd-modules-load[218]: Inserted module 'overlay' Nov 8 00:28:23.768950 systemd-modules-load[218]: Inserted module 'br_netfilter' Nov 8 00:28:23.796284 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:28:23.806449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:23.806894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:23.820392 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:28:23.822515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:28:23.822793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:23.828474 dracut-cmdline[246]: dracut-dracut-053 Nov 8 00:28:23.829593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:28:23.830484 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:28:23.831262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:28:23.853419 systemd-resolved[263]: Positive Trust Anchors: Nov 8 00:28:23.853428 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:28:23.853449 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:28:23.855114 systemd-resolved[263]: Defaulting to hostname 'linux'. Nov 8 00:28:23.855703 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:28:23.855840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:23.882194 kernel: SCSI subsystem initialized Nov 8 00:28:23.889183 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:28:23.897193 kernel: iscsi: registered transport (tcp) Nov 8 00:28:23.912197 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:28:23.912234 kernel: QLogic iSCSI HBA Driver Nov 8 00:28:23.931761 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:23.935268 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:28:23.950320 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:28:23.950351 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:28:23.951581 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:28:23.982217 kernel: raid6: avx2x4 gen() 52163 MB/s Nov 8 00:28:23.999188 kernel: raid6: avx2x2 gen() 52809 MB/s Nov 8 00:28:24.016432 kernel: raid6: avx2x1 gen() 44767 MB/s Nov 8 00:28:24.016456 kernel: raid6: using algorithm avx2x2 gen() 52809 MB/s Nov 8 00:28:24.034352 kernel: raid6: .... xor() 31588 MB/s, rmw enabled Nov 8 00:28:24.034384 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:28:24.048190 kernel: xor: automatically using best checksumming function avx Nov 8 00:28:24.151190 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:28:24.156970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:24.161366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:24.168531 systemd-udevd[434]: Using default interface naming scheme 'v255'. Nov 8 00:28:24.171008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:24.176496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 8 00:28:24.182624 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Nov 8 00:28:24.197732 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:28:24.201399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:28:24.273927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:24.280356 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:28:24.291997 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:28:24.292481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:28:24.293043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:24.293271 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:28:24.297264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:28:24.306382 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:28:24.346185 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 8 00:28:24.352170 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 8 00:28:24.352229 kernel: vmw_pvscsi: using 64bit dma Nov 8 00:28:24.352241 kernel: vmw_pvscsi: max_id: 16 Nov 8 00:28:24.352249 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 8 00:28:24.355231 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 8 00:28:24.355248 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 8 00:28:24.355257 kernel: vmw_pvscsi: using MSI-X Nov 8 00:28:24.357180 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 8 00:28:24.359645 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 8 00:28:24.359751 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 8 00:28:24.361195 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 8 00:28:24.363183 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 8 00:28:24.369186 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:28:24.374788 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 8 00:28:24.377480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:24.377558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:24.378049 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:28:24.378160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:24.378245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:24.378351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:24.388269 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:28:24.388307 kernel: AES CTR mode by8 optimization enabled Nov 8 00:28:24.385825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:24.392224 kernel: libata version 3.00 loaded. 
Nov 8 00:28:24.397308 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 8 00:28:24.397423 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 8 00:28:24.397517 kernel: scsi host1: ata_piix Nov 8 00:28:24.399193 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:28:24.399295 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 8 00:28:24.399364 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 8 00:28:24.399429 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 8 00:28:24.399495 kernel: scsi host2: ata_piix Nov 8 00:28:24.399560 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 8 00:28:24.399569 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 8 00:28:24.409456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:24.413293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:28:24.421707 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:24.438628 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:24.438661 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:28:24.566194 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 8 00:28:24.571193 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 8 00:28:24.595738 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 8 00:28:24.595865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:28:24.602240 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (491) Nov 8 00:28:24.608195 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (487) Nov 8 00:28:24.608226 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:28:24.608756 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 8 00:28:24.612066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 8 00:28:24.614817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:28:24.617067 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 8 00:28:24.617344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 8 00:28:24.621260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:28:24.647212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:24.652329 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:25.703450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:25.703495 disk-uuid[590]: The operation has completed successfully. Nov 8 00:28:25.818872 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:28:25.819142 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:28:25.827323 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:28:25.829247 sh[607]: Success Nov 8 00:28:25.838189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:28:25.883041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:28:25.888042 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Nov 8 00:28:25.888400 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:28:25.903340 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:28:25.903371 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:25.903379 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:28:25.904429 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:28:25.906183 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:28:25.912185 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:28:25.913318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:28:25.918266 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 8 00:28:25.920254 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:28:25.940930 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:25.940967 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:25.940975 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:25.955190 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:25.962258 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:28:25.963282 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:25.967119 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:28:25.972306 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:28:25.982368 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:28:25.991299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:28:26.044665 ignition[667]: Ignition 2.19.0 Nov 8 00:28:26.044672 ignition[667]: Stage: fetch-offline Nov 8 00:28:26.044702 ignition[667]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.044709 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.044771 ignition[667]: parsed url from cmdline: "" Nov 8 00:28:26.044774 ignition[667]: no config URL provided Nov 8 00:28:26.044776 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:28:26.044781 ignition[667]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:28:26.045203 ignition[667]: config successfully fetched Nov 8 00:28:26.045221 ignition[667]: parsing config with SHA512: f99f94d8a07b24e781616f4e53aef106b429aac977dd81c95b13f54424ef851634f4937812e590a7b066af3b93f7edca4810785e5cb5bd8d076a94cbc3a78fbb Nov 8 00:28:26.047586 unknown[667]: fetched base config from "system" Nov 8 00:28:26.047592 unknown[667]: fetched user config from "vmware" Nov 8 00:28:26.047853 ignition[667]: fetch-offline: fetch-offline passed Nov 8 00:28:26.047895 ignition[667]: Ignition finished successfully Nov 8 00:28:26.048888 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:26.070241 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:26.074262 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 8 00:28:26.086429 systemd-networkd[800]: lo: Link UP Nov 8 00:28:26.086646 systemd-networkd[800]: lo: Gained carrier Nov 8 00:28:26.087480 systemd-networkd[800]: Enumeration completed Nov 8 00:28:26.087649 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:28:26.087812 systemd[1]: Reached target network.target - Network. Nov 8 00:28:26.087816 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 8 00:28:26.087919 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:28:26.091382 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:28:26.091502 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:28:26.091881 systemd-networkd[800]: ens192: Link UP Nov 8 00:28:26.091995 systemd-networkd[800]: ens192: Gained carrier Nov 8 00:28:26.096322 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:28:26.104678 ignition[802]: Ignition 2.19.0 Nov 8 00:28:26.104685 ignition[802]: Stage: kargs Nov 8 00:28:26.104788 ignition[802]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.104795 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.105356 ignition[802]: kargs: kargs passed Nov 8 00:28:26.105389 ignition[802]: Ignition finished successfully Nov 8 00:28:26.106694 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:28:26.114283 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:28:26.121752 ignition[810]: Ignition 2.19.0 Nov 8 00:28:26.121758 ignition[810]: Stage: disks Nov 8 00:28:26.121864 ignition[810]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.121871 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.122434 ignition[810]: disks: disks passed Nov 8 00:28:26.122462 ignition[810]: Ignition finished successfully Nov 8 00:28:26.123440 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:28:26.123730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:26.123962 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:28:26.124181 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:28:26.124269 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:28:26.124355 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:28:26.127264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:28:26.137400 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:28:26.138585 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:28:26.144280 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:28:26.206183 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:28:26.206563 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:28:26.207024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:28:26.212226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:28:26.214268 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 8 00:28:26.214583 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:28:26.214607 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:28:26.214620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:26.217244 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:28:26.217837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:28:26.221184 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (826) Nov 8 00:28:26.223986 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:26.224008 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:26.224022 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:26.228185 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:26.229001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:28:26.246161 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:28:26.248668 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:28:26.250791 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:28:26.252827 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:28:26.304105 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:26.309276 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:28:26.311736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:28:26.314193 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:26.328108 ignition[939]: INFO : Ignition 2.19.0 Nov 8 00:28:26.328108 ignition[939]: INFO : Stage: mount Nov 8 00:28:26.328548 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.328548 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.328826 ignition[939]: INFO : mount: mount passed Nov 8 00:28:26.328950 ignition[939]: INFO : Ignition finished successfully Nov 8 00:28:26.329397 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:28:26.329579 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:28:26.333303 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:28:26.902069 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:28:26.908316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:28:26.970362 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (951) Nov 8 00:28:26.973203 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:26.973225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:26.973236 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:26.979193 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:26.980555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:28:27.003682 ignition[967]: INFO : Ignition 2.19.0 Nov 8 00:28:27.003682 ignition[967]: INFO : Stage: files Nov 8 00:28:27.004062 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:27.004062 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:27.004486 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:28:27.005266 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:28:27.005409 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:28:27.007794 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:28:27.007998 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:28:27.008168 unknown[967]: wrote ssh authorized keys file for user: core Nov 8 00:28:27.008383 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:28:27.010791 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:28:27.010980 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:28:27.010980 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:28:27.010980 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:28:27.048879 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:28:27.112655 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:28:27.113735 ignition[967]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:28:27.385298 systemd-networkd[800]: ens192: Gained IPv6LL Nov 8 00:28:27.585686 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:28:27.793505 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.793505 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:28:27.793991 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:28:27.795729 ignition[967]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:28:27.795729 ignition[967]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 8 00:28:27.795729 ignition[967]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:28:27.829798 ignition[967]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:28:27.833249 ignition[967]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) 
for "coreos-metadata.service" Nov 8 00:28:27.833469 ignition[967]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:28:27.833469 ignition[967]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:28:27.833469 ignition[967]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:28:27.834451 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:28:27.834451 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:28:27.834451 ignition[967]: INFO : files: files passed Nov 8 00:28:27.834451 ignition[967]: INFO : Ignition finished successfully Nov 8 00:28:27.834514 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:28:27.839267 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:28:27.841267 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:28:27.842526 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:28:27.842592 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:28:27.849077 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:27.849077 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:27.850263 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:27.851359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:27.851591 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:28:27.855269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:28:27.871847 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:28:27.871904 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:28:27.872130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:28:27.872253 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:28:27.872487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:28:27.874295 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:28:27.883244 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:27.888268 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:28:27.895562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:27.895865 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:27.896070 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:28:27.896261 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:28:27.896355 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:27.896903 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:28:27.897108 systemd[1]: Stopped target basic.target - Basic System. 
Nov 8 00:28:27.897306 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:28:27.897488 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:27.897723 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:27.897940 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:28:27.898138 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:28:27.898366 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:28:27.898561 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:28:27.898735 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:28:27.898862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:28:27.898961 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:28:27.899323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:27.899501 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:27.899656 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:28:27.899712 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:27.899881 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:28:27.899961 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:28:27.900387 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:28:27.900491 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:27.900706 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:28:27.900858 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:28:27.904198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:27.904436 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:28:27.904603 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:28:27.904752 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:28:27.904804 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:27.905076 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:28:27.905142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:28:27.905372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:28:27.905457 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:27.905689 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:28:27.905766 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:28:27.913369 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:28:27.916323 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:28:27.916437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:28:27.916532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:27.916725 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:28:27.916784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:28:27.918711 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 8 00:28:27.918776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:28:27.923865 ignition[1023]: INFO : Ignition 2.19.0 Nov 8 00:28:27.923865 ignition[1023]: INFO : Stage: umount Nov 8 00:28:27.923865 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:27.923865 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:27.925652 ignition[1023]: INFO : umount: umount passed Nov 8 00:28:27.925792 ignition[1023]: INFO : Ignition finished successfully Nov 8 00:28:27.926851 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:28:27.926967 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:28:27.927249 systemd[1]: Stopped target network.target - Network. Nov 8 00:28:27.927354 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:28:27.927385 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:28:27.927545 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:28:27.927568 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:28:27.927814 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:28:27.927844 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:28:27.928002 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:28:27.928035 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:27.928314 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:28:27.928618 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:28:27.931613 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:28:27.931679 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:28:27.931977 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:28:27.932004 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:27.936219 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:28:27.936312 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:28:27.936339 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:27.936455 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 8 00:28:27.936477 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:28:27.936626 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:27.936862 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:28:27.936915 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:28:27.940649 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:28:27.940694 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:27.940833 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:28:27.940854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:27.940976 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:28:27.940997 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:27.943953 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 8 00:28:27.944012 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:28:27.947379 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:28:27.947452 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:27.948086 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:28:27.948117 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:27.948258 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:28:27.948276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:27.948389 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:28:27.948411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:27.948565 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:28:27.948587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:27.948725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:27.948747 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:27.956282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:28:27.956389 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:28:27.956420 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:27.956542 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:28:27.956565 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:27.956677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:28:27.956698 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:27.956805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:27.956826 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:27.957878 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:28:27.960722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:28:27.960784 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:28:28.061387 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:28:28.061475 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:28:28.062000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:28:28.062160 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:28:28.062224 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:28.065301 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:28:28.093093 systemd[1]: Switching root. 
Nov 8 00:28:28.122105 systemd-journald[217]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Nov 8 00:28:23.737984 kernel: Disabled fast string operations Nov 8 00:28:23.737990 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:28:23.737996 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:28:23.738001 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:28:23.738007 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:28:23.738014 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:28:23.738020 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 8 00:28:23.738025 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 8 00:28:23.738031 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 8 00:28:23.738037 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:28:23.738042 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:28:23.738048 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:28:23.738054 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 8 00:28:23.738061 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:28:23.738066 kernel: active return thunk: its_return_thunk Nov 8 00:28:23.738072 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:28:23.738078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:28:23.738084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:28:23.738089 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:28:23.738095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:28:23.738101 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:28:23.738106 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:28:23.738113 kernel: pid_max: default: 131072 minimum: 1024 Nov 8 00:28:23.738119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:28:23.738124 kernel: landlock: Up and running. Nov 8 00:28:23.738130 kernel: SELinux: Initializing. Nov 8 00:28:23.738136 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.738142 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.738147 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 8 00:28:23.738153 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:28:23.738159 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:28:23.738166 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Nov 8 00:28:23.738179 kernel: Performance Events: Skylake events, core PMU driver. 
Nov 8 00:28:23.738187 kernel: core: CPUID marked event: 'cpu cycles' unavailable Nov 8 00:28:23.738193 kernel: core: CPUID marked event: 'instructions' unavailable Nov 8 00:28:23.738199 kernel: core: CPUID marked event: 'bus cycles' unavailable Nov 8 00:28:23.738204 kernel: core: CPUID marked event: 'cache references' unavailable Nov 8 00:28:23.738210 kernel: core: CPUID marked event: 'cache misses' unavailable Nov 8 00:28:23.738215 kernel: core: CPUID marked event: 'branch instructions' unavailable Nov 8 00:28:23.738221 kernel: core: CPUID marked event: 'branch misses' unavailable Nov 8 00:28:23.738229 kernel: ... version: 1 Nov 8 00:28:23.738234 kernel: ... bit width: 48 Nov 8 00:28:23.738240 kernel: ... generic registers: 4 Nov 8 00:28:23.738246 kernel: ... value mask: 0000ffffffffffff Nov 8 00:28:23.738251 kernel: ... max period: 000000007fffffff Nov 8 00:28:23.738257 kernel: ... fixed-purpose events: 0 Nov 8 00:28:23.738263 kernel: ... event mask: 000000000000000f Nov 8 00:28:23.738268 kernel: signal: max sigframe size: 1776 Nov 8 00:28:23.738274 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:28:23.738281 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:28:23.738287 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:28:23.738293 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:28:23.738299 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:28:23.738304 kernel: .... node #0, CPUs: #1 Nov 8 00:28:23.738310 kernel: Disabled fast string operations Nov 8 00:28:23.738316 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Nov 8 00:28:23.738321 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 8 00:28:23.738327 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:28:23.738332 kernel: smpboot: Max logical packages: 128 Nov 8 00:28:23.738339 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Nov 8 00:28:23.738345 kernel: devtmpfs: initialized Nov 8 00:28:23.738351 kernel: x86/mm: Memory block size: 128MB Nov 8 00:28:23.738357 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Nov 8 00:28:23.738363 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:28:23.738368 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Nov 8 00:28:23.738374 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:28:23.738380 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:28:23.738385 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:28:23.738392 kernel: audit: type=2000 audit(1762561701.090:1): state=initialized audit_enabled=0 res=1 Nov 8 00:28:23.738397 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:28:23.738403 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:28:23.738409 kernel: cpuidle: using governor menu Nov 8 00:28:23.738414 kernel: Simple Boot Flag at 0x36 set to 0x80 Nov 8 00:28:23.738420 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:28:23.738426 kernel: dca service started, version 1.12.1 Nov 8 00:28:23.738432 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Nov 8 00:28:23.738437 kernel: PCI: Using configuration type 1 for base access Nov 8 00:28:23.738444 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:28:23.738450 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:28:23.738455 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:28:23.738461 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:28:23.738467 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:28:23.738472 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:28:23.738478 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:28:23.738484 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:28:23.738489 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:28:23.738496 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 8 00:28:23.738502 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:28:23.738507 kernel: ACPI: Interpreter enabled Nov 8 00:28:23.738513 kernel: ACPI: PM: (supports S0 S1 S5) Nov 8 00:28:23.738519 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:28:23.738524 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:28:23.738530 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:28:23.738535 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Nov 8 00:28:23.738541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Nov 8 00:28:23.738621 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:28:23.738678 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Nov 8 00:28:23.738729 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Nov 8 00:28:23.738737 kernel: PCI host bridge to bus 0000:00 Nov 8 00:28:23.738789 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:28:23.738836 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Nov 8 00:28:23.738884 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:28:23.738930 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:28:23.738975 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Nov 8 00:28:23.739022 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Nov 8 00:28:23.739082 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Nov 8 00:28:23.739139 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Nov 8 00:28:23.739211 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Nov 8 00:28:23.739266 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Nov 8 00:28:23.739318 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Nov 8 00:28:23.739369 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 8 00:28:23.739420 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 8 00:28:23.739472 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 8 00:28:23.739522 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 8 00:28:23.739582 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Nov 8 00:28:23.739635 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Nov 8 00:28:23.739685 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Nov 8 00:28:23.739743 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Nov 8 00:28:23.739794 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Nov 8 00:28:23.739845 kernel: pci 0000:00:07.7: reg 0x14: 
[mem 0xfebfe000-0xfebfffff 64bit] Nov 8 00:28:23.739903 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Nov 8 00:28:23.739953 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Nov 8 00:28:23.740005 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Nov 8 00:28:23.740055 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Nov 8 00:28:23.740105 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Nov 8 00:28:23.740155 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:28:23.740553 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Nov 8 00:28:23.740618 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.740672 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.740728 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.740794 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.740856 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.740910 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.740968 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.741020 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.741074 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.741126 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.741195 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.741251 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743222 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743316 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743399 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743470 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743529 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743584 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743657 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743712 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743772 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743826 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743882 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.743935 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.743996 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744049 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744106 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744158 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744249 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744303 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744363 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744417 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744473 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744561 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744655 kernel: pci 0000:00:17.1: [15ad:07a0] 
type 01 class 0x060400 Nov 8 00:28:23.744722 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744789 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744854 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.744924 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.744986 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.745068 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.745153 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747280 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.747431 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747561 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.747656 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747751 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.747868 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.747977 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748069 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748162 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748260 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748348 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748438 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748504 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748571 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748630 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748684 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748804 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.748898 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.748983 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.749076 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.749211 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Nov 8 00:28:23.749307 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.749398 kernel: pci_bus 0000:01: extended config space not accessible Nov 8 00:28:23.749492 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:28:23.749583 kernel: pci_bus 0000:02: extended config space not accessible Nov 8 00:28:23.749599 kernel: acpiphp: Slot [32] registered Nov 8 00:28:23.749614 kernel: acpiphp: Slot [33] registered Nov 8 00:28:23.749626 kernel: acpiphp: Slot [34] registered Nov 8 00:28:23.749637 kernel: acpiphp: Slot [35] registered Nov 8 00:28:23.749647 kernel: acpiphp: Slot [36] registered Nov 8 00:28:23.749657 kernel: acpiphp: Slot [37] registered Nov 8 00:28:23.749668 kernel: acpiphp: Slot [38] registered Nov 8 00:28:23.749679 kernel: acpiphp: Slot [39] registered Nov 8 00:28:23.749689 kernel: acpiphp: Slot [40] registered Nov 8 00:28:23.749699 kernel: acpiphp: Slot [41] registered Nov 8 00:28:23.749713 kernel: acpiphp: Slot [42] registered Nov 8 00:28:23.749723 kernel: acpiphp: Slot [43] registered Nov 8 00:28:23.749733 kernel: acpiphp: Slot [44] registered Nov 8 00:28:23.749743 kernel: acpiphp: Slot [45] registered Nov 8 00:28:23.749753 kernel: 
acpiphp: Slot [46] registered Nov 8 00:28:23.749764 kernel: acpiphp: Slot [47] registered Nov 8 00:28:23.749774 kernel: acpiphp: Slot [48] registered Nov 8 00:28:23.749783 kernel: acpiphp: Slot [49] registered Nov 8 00:28:23.749793 kernel: acpiphp: Slot [50] registered Nov 8 00:28:23.749803 kernel: acpiphp: Slot [51] registered Nov 8 00:28:23.749817 kernel: acpiphp: Slot [52] registered Nov 8 00:28:23.749827 kernel: acpiphp: Slot [53] registered Nov 8 00:28:23.749837 kernel: acpiphp: Slot [54] registered Nov 8 00:28:23.749847 kernel: acpiphp: Slot [55] registered Nov 8 00:28:23.749857 kernel: acpiphp: Slot [56] registered Nov 8 00:28:23.749867 kernel: acpiphp: Slot [57] registered Nov 8 00:28:23.749877 kernel: acpiphp: Slot [58] registered Nov 8 00:28:23.749887 kernel: acpiphp: Slot [59] registered Nov 8 00:28:23.749896 kernel: acpiphp: Slot [60] registered Nov 8 00:28:23.749913 kernel: acpiphp: Slot [61] registered Nov 8 00:28:23.749935 kernel: acpiphp: Slot [62] registered Nov 8 00:28:23.749953 kernel: acpiphp: Slot [63] registered Nov 8 00:28:23.750063 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Nov 8 00:28:23.750153 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:28:23.750253 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:28:23.750347 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:28:23.752312 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Nov 8 00:28:23.752376 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Nov 8 00:28:23.752430 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Nov 8 00:28:23.752496 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Nov 8 00:28:23.752549 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Nov 8 00:28:23.752609 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Nov 8 00:28:23.752664 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Nov 8 00:28:23.752717 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Nov 8 00:28:23.752772 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:28:23.752825 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:28:23.752888 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:28:23.752944 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:28:23.752996 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:28:23.753048 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:28:23.753102 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:28:23.753154 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:28:23.753220 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:28:23.753272 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:28:23.753328 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:28:23.753380 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:28:23.753431 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:28:23.753483 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:28:23.753539 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:28:23.753594 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:28:23.753646 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:28:23.753700 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:28:23.753751 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:28:23.753803 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:28:23.753859 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:28:23.753911 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:28:23.753963 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:28:23.754016 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:28:23.754068 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:28:23.754120 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:28:23.756208 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:28:23.756277 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:28:23.756337 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:28:23.756396 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Nov 8 00:28:23.756450 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Nov 8 00:28:23.756503 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Nov 8 00:28:23.756556 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Nov 8 00:28:23.756609 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Nov 8 00:28:23.756660 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Nov 8 00:28:23.756716 kernel: pci 0000:0b:00.0: supports D1 D2 Nov 8 00:28:23.756768 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:28:23.756820 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Nov 8 00:28:23.756873 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:28:23.756960 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:28:23.757012 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:28:23.757064 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:28:23.757116 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:28:23.757170 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:28:23.758238 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:28:23.758293 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:28:23.758362 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:28:23.758414 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:28:23.758466 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:28:23.758521 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:28:23.758575 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:28:23.758630 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:28:23.758686 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:28:23.758738 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:28:23.758791 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:28:23.758846 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:28:23.758908 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:28:23.758961 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:28:23.759016 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:28:23.759071 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:28:23.759123 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:28:23.761262 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:28:23.761321 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:28:23.761374 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:28:23.761428 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:28:23.761480 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:28:23.761532 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:28:23.761608 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:28:23.761664 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:28:23.761715 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:28:23.761767 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:28:23.761818 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:28:23.761876 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:28:23.761929 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:28:23.761983 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:28:23.762034 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:28:23.762088 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:28:23.762139 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:28:23.762234 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:28:23.762288 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:28:23.762339 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:28:23.762390 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:28:23.762447 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:28:23.762498 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:28:23.762549 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:28:23.762601 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:28:23.762653 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:28:23.762703 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:28:23.762756 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:28:23.762806 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:28:23.762870 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:28:23.762925 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:28:23.762976 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:28:23.763026 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:28:23.763076 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:28:23.763128 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:28:23.764406 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:28:23.764462 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:28:23.764517 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:28:23.764570 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:28:23.764623 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:28:23.764674 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:28:23.764727 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:28:23.764778 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:28:23.764829 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:28:23.764923 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:28:23.765033 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:28:23.765295 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:28:23.765367 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:28:23.765448 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:28:23.765858 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:28:23.765922 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:28:23.765975 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:28:23.766027 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:28:23.766082 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:28:23.766133 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:28:23.766196 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:28:23.766206 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Nov 8 00:28:23.766211 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Nov 8 00:28:23.766217 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Nov 8 00:28:23.766223 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:28:23.766229 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Nov 8 00:28:23.766237 kernel: iommu: Default domain type: Translated Nov 8 00:28:23.766243 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:28:23.766248 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:28:23.766254 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:28:23.766260 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Nov 8 00:28:23.766266 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Nov 8 00:28:23.766337 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Nov 8 00:28:23.766389 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Nov 8 00:28:23.766440 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:28:23.766450 kernel: vgaarb: loaded Nov 8 00:28:23.766457 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Nov 8 00:28:23.766463 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Nov 8 00:28:23.766469 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:28:23.766474 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:28:23.766480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:28:23.766486 kernel: pnp: PnP ACPI init Nov 8 00:28:23.766544 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Nov 8 00:28:23.766610 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Nov 8 00:28:23.766657 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Nov 8 00:28:23.766708 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Nov 8 00:28:23.766758 kernel: pnp 00:06: [dma 2] Nov 8 00:28:23.766808 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Nov 8 00:28:23.766859 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Nov 8 00:28:23.766908 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Nov 8 00:28:23.766916 kernel: pnp: PnP ACPI: found 8 devices Nov 8 00:28:23.766922 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:28:23.766928 kernel: NET: Registered PF_INET protocol family Nov 8 00:28:23.766934 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:28:23.766939 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:28:23.766945 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:28:23.766951 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:28:23.766957 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:28:23.766964 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:28:23.766970 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.766976 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:28:23.766981 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:28:23.766987 kernel: NET: Registered PF_XDP protocol family Nov 8 00:28:23.767038 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Nov 8 00:28:23.767091 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 8 00:28:23.767153 kernel: pci 
0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 8 00:28:23.767552 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 8 00:28:23.767610 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 8 00:28:23.767664 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Nov 8 00:28:23.767717 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Nov 8 00:28:23.767769 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Nov 8 00:28:23.767821 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Nov 8 00:28:23.767876 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Nov 8 00:28:23.767928 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Nov 8 00:28:23.767979 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Nov 8 00:28:23.768030 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Nov 8 00:28:23.768081 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Nov 8 00:28:23.768135 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Nov 8 00:28:23.768279 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Nov 8 00:28:23.768352 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Nov 8 00:28:23.768424 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Nov 8 00:28:23.768480 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Nov 8 00:28:23.768532 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Nov 8 00:28:23.768586 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Nov 8 00:28:23.768637 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Nov 8 00:28:23.768687 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Nov 8 00:28:23.768737 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:28:23.768788 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:28:23.768838 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.768888 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.768984 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769035 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769086 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769136 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769197 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769249 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769299 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769349 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769402 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769452 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769502 kernel: pci 
0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769552 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769602 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769652 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769703 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769753 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769806 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769861 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.769912 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.769981 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770068 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770134 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770258 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770329 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770383 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770433 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770482 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770532 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770582 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770632 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770682 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770732 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770785 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770836 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770887 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.770938 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.770988 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771039 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771090 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771141 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771219 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771285 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771336 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771385 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771435 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771485 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771535 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771585 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771635 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771685 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771739 kernel: pci 0000:00:18.2: BAR 13: no space for 
[io size 0x1000] Nov 8 00:28:23.771789 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771838 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771889 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.771939 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.771989 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772039 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772089 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772138 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772231 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772282 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772332 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772383 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772433 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772483 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772533 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772583 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772633 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772699 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.772754 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.772804 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775070 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775134 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775208 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775264 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775316 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775367 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775418 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775473 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775537 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775598 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Nov 8 00:28:23.775651 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Nov 8 00:28:23.775703 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:28:23.775757 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Nov 8 00:28:23.775809 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Nov 8 00:28:23.775863 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Nov 8 00:28:23.775915 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:28:23.775974 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Nov 8 00:28:23.776027 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Nov 8 00:28:23.776080 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Nov 8 00:28:23.776131 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Nov 8 00:28:23.776261 kernel: pci 0000:00:15.0: bridge 
window [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:28:23.776327 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Nov 8 00:28:23.776390 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Nov 8 00:28:23.776445 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Nov 8 00:28:23.776510 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:28:23.776572 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Nov 8 00:28:23.776642 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Nov 8 00:28:23.776706 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Nov 8 00:28:23.776770 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:28:23.776822 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Nov 8 00:28:23.776873 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Nov 8 00:28:23.776924 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:28:23.776974 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Nov 8 00:28:23.777025 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Nov 8 00:28:23.777080 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:28:23.777134 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Nov 8 00:28:23.777192 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Nov 8 00:28:23.777244 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:28:23.777295 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Nov 8 00:28:23.777346 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Nov 8 00:28:23.777400 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:28:23.777451 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Nov 8 00:28:23.777503 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Nov 8 00:28:23.777555 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:28:23.777610 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Nov 8 00:28:23.777663 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Nov 8 00:28:23.777715 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Nov 8 00:28:23.777767 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Nov 8 00:28:23.777819 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:28:23.777874 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Nov 8 00:28:23.777925 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Nov 8 00:28:23.777976 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Nov 8 00:28:23.778028 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:28:23.778080 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Nov 8 00:28:23.778132 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Nov 8 00:28:23.778216 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Nov 8 00:28:23.778271 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:28:23.778322 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Nov 8 00:28:23.778376 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Nov 8 00:28:23.778426 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:28:23.778477 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Nov 8 00:28:23.778528 kernel: pci 0000:00:16.4: 
bridge window [mem 0xfc400000-0xfc4fffff] Nov 8 00:28:23.778579 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:28:23.778629 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Nov 8 00:28:23.778680 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Nov 8 00:28:23.778730 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:28:23.778781 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Nov 8 00:28:23.778833 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Nov 8 00:28:23.778904 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:28:23.778955 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Nov 8 00:28:23.779005 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Nov 8 00:28:23.779055 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:28:23.779106 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Nov 8 00:28:23.779156 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Nov 8 00:28:23.779235 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Nov 8 00:28:23.779288 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:28:23.779341 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Nov 8 00:28:23.779395 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Nov 8 00:28:23.779447 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Nov 8 00:28:23.779499 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:28:23.779551 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Nov 8 00:28:23.779603 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Nov 8 00:28:23.779655 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Nov 8 00:28:23.779706 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:28:23.779758 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Nov 8 00:28:23.779811 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Nov 8 00:28:23.779863 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:28:23.779931 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Nov 8 00:28:23.779983 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Nov 8 00:28:23.780035 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:28:23.780087 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Nov 8 00:28:23.780139 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Nov 8 00:28:23.780200 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:28:23.780253 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Nov 8 00:28:23.780304 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Nov 8 00:28:23.780355 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:28:23.780410 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Nov 8 00:28:23.780462 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Nov 8 00:28:23.780513 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:28:23.780565 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Nov 8 00:28:23.780616 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Nov 8 00:28:23.780667 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Nov 8 00:28:23.780719 kernel: pci 
0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:28:23.780771 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Nov 8 00:28:23.780823 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Nov 8 00:28:23.780875 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Nov 8 00:28:23.780930 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:28:23.781018 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Nov 8 00:28:23.781070 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Nov 8 00:28:23.781121 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:28:23.781211 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Nov 8 00:28:23.781269 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Nov 8 00:28:23.781321 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:28:23.781372 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Nov 8 00:28:23.781424 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Nov 8 00:28:23.781478 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:28:23.781530 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Nov 8 00:28:23.781582 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Nov 8 00:28:23.781633 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:28:23.781685 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Nov 8 00:28:23.781736 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Nov 8 00:28:23.781787 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:28:23.781839 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Nov 8 00:28:23.781890 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Nov 8 00:28:23.781941 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:28:23.781993 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:28:23.782041 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:28:23.782087 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:28:23.782133 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:28:23.783653 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:28:23.783715 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Nov 8 00:28:23.783767 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Nov 8 00:28:23.783818 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Nov 8 00:28:23.783866 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Nov 8 00:28:23.783914 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Nov 8 00:28:23.783961 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Nov 8 00:28:23.784008 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Nov 8 00:28:23.784055 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Nov 8 00:28:23.784107 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Nov 8 00:28:23.784157 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Nov 8 00:28:23.784233 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Nov 8 00:28:23.784285 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Nov 8 00:28:23.784332 kernel: pci_bus 0000:04: resource 1 [mem 
0xfd100000-0xfd1fffff] Nov 8 00:28:23.784378 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Nov 8 00:28:23.784431 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Nov 8 00:28:23.784479 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Nov 8 00:28:23.784530 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Nov 8 00:28:23.784580 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Nov 8 00:28:23.784627 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Nov 8 00:28:23.784679 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Nov 8 00:28:23.784726 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Nov 8 00:28:23.784776 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Nov 8 00:28:23.784824 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Nov 8 00:28:23.784913 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Nov 8 00:28:23.784961 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Nov 8 00:28:23.785013 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Nov 8 00:28:23.785062 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Nov 8 00:28:23.785124 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Nov 8 00:28:23.785181 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Nov 8 00:28:23.785231 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Nov 8 00:28:23.785283 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Nov 8 00:28:23.785333 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Nov 8 00:28:23.785382 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Nov 8 00:28:23.785437 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Nov 8 00:28:23.785488 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Nov 8 00:28:23.785541 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Nov 8 00:28:23.785624 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Nov 8 00:28:23.785683 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Nov 8 00:28:23.785751 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Nov 8 00:28:23.785802 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Nov 8 00:28:23.785862 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Nov 8 00:28:23.785932 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Nov 8 00:28:23.785994 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Nov 8 00:28:23.786046 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Nov 8 00:28:23.786109 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Nov 8 00:28:23.786169 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Nov 8 00:28:23.786247 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Nov 8 00:28:23.786299 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Nov 8 00:28:23.786362 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Nov 8 00:28:23.786427 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Nov 8 00:28:23.786480 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Nov 8 00:28:23.786538 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Nov 8 00:28:23.786607 kernel: pci_bus 
0000:15: resource 0 [io 0xe000-0xefff] Nov 8 00:28:23.786667 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Nov 8 00:28:23.786720 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Nov 8 00:28:23.786793 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Nov 8 00:28:23.786855 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Nov 8 00:28:23.786909 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Nov 8 00:28:23.786965 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Nov 8 00:28:23.787017 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Nov 8 00:28:23.787066 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Nov 8 00:28:23.787123 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Nov 8 00:28:23.787224 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Nov 8 00:28:23.787281 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Nov 8 00:28:23.787330 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Nov 8 00:28:23.787385 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Nov 8 00:28:23.787437 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Nov 8 00:28:23.787485 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Nov 8 00:28:23.787537 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Nov 8 00:28:23.787586 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Nov 8 00:28:23.787634 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Nov 8 00:28:23.787686 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Nov 8 00:28:23.787735 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Nov 8 00:28:23.787793 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Nov 8 00:28:23.787841 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Nov 8 00:28:23.787893 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Nov 8 00:28:23.787942 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Nov 8 00:28:23.787994 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Nov 8 00:28:23.788043 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Nov 8 00:28:23.788096 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Nov 8 00:28:23.788146 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Nov 8 00:28:23.788223 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Nov 8 00:28:23.788275 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Nov 8 00:28:23.788333 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:28:23.788343 kernel: PCI: CLS 32 bytes, default 64 Nov 8 00:28:23.788350 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:28:23.788359 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Nov 8 00:28:23.788365 kernel: clocksource: Switched to clocksource tsc Nov 8 00:28:23.788372 kernel: Initialise system trusted keyrings Nov 8 00:28:23.788379 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:28:23.788385 kernel: Key type asymmetric registered Nov 8 00:28:23.788391 kernel: Asymmetric key parser 'x509' registered Nov 8 00:28:23.788399 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Nov 8 00:28:23.788405 kernel: io scheduler mq-deadline registered Nov 8 00:28:23.788412 kernel: io scheduler kyber registered Nov 8 00:28:23.788419 kernel: io scheduler bfq registered Nov 8 00:28:23.788476 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Nov 8 00:28:23.788532 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788588 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Nov 8 00:28:23.788642 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788696 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Nov 8 00:28:23.788750 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788805 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Nov 8 00:28:23.788870 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.788926 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Nov 8 00:28:23.788980 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789034 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Nov 8 00:28:23.789088 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789145 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Nov 8 00:28:23.789359 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789415 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Nov 8 00:28:23.789469 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789523 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Nov 8 00:28:23.789580 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789634 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Nov 8 00:28:23.789688 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789742 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Nov 8 00:28:23.789796 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789851 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Nov 8 00:28:23.789905 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.789962 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Nov 8 00:28:23.790016 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790070 kernel: pcieport 0000:00:16.5: PME: Signaling 
with IRQ 37 Nov 8 00:28:23.790124 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790186 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Nov 8 00:28:23.790243 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790297 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Nov 8 00:28:23.790351 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790405 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Nov 8 00:28:23.790699 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790757 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Nov 8 00:28:23.790811 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790869 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Nov 8 00:28:23.790923 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.790977 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Nov 8 00:28:23.791031 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791085 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Nov 8 00:28:23.791139 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791574 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Nov 8 00:28:23.791679 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791737 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Nov 8 00:28:23.791801 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.791877 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Nov 8 00:28:23.791960 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792023 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Nov 8 00:28:23.792080 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792134 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Nov 8 00:28:23.792194 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792248 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Nov 8 00:28:23.792316 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792391 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Nov 8 00:28:23.792480 
kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792561 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Nov 8 00:28:23.792625 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792681 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Nov 8 00:28:23.792753 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792810 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Nov 8 00:28:23.792903 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.792960 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Nov 8 00:28:23.793016 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Nov 8 00:28:23.793027 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:28:23.793035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:28:23.793041 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:28:23.793048 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Nov 8 00:28:23.793054 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:28:23.793061 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:28:23.793117 kernel: rtc_cmos 00:01: registered as rtc0 Nov 8 00:28:23.793169 kernel: rtc_cmos 00:01: setting system clock to 2025-11-08T00:28:23 UTC (1762561703) Nov 8 00:28:23.793299 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:28:23.793309 kernel: intel_pstate: CPU model not supported Nov 8 00:28:23.793316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:28:23.793322 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:28:23.793328 kernel: Segment Routing with IPv6 Nov 8 00:28:23.793334 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:28:23.793341 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:28:23.793348 kernel: Key type dns_resolver registered Nov 8 00:28:23.793354 kernel: IPI shorthand broadcast: enabled Nov 8 00:28:23.793367 kernel: sched_clock: Marking stable (924133064, 226104169)->(1212169787, -61932554) Nov 8 00:28:23.793379 kernel: registered taskstats version 1 Nov 8 00:28:23.793389 kernel: Loading compiled-in X.509 certificates Nov 8 00:28:23.793398 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:28:23.793407 kernel: Key type .fscrypt registered Nov 8 00:28:23.793417 kernel: Key type fscrypt-provisioning registered Nov 8 00:28:23.793426 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:28:23.793436 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:28:23.793448 kernel: ima: No architecture policies found Nov 8 00:28:23.793458 kernel: clk: Disabling unused clocks Nov 8 00:28:23.793468 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:28:23.793477 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:28:23.793487 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:28:23.793496 kernel: Run /init as init process Nov 8 00:28:23.793504 kernel: with arguments: Nov 8 00:28:23.793515 kernel: /init Nov 8 00:28:23.793525 kernel: with environment: Nov 8 00:28:23.793536 kernel: HOME=/ Nov 8 00:28:23.793545 kernel: TERM=linux Nov 8 00:28:23.793556 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:28:23.793568 systemd[1]: Detected virtualization vmware. Nov 8 00:28:23.793579 systemd[1]: Detected architecture x86-64. Nov 8 00:28:23.793589 systemd[1]: Running in initrd. Nov 8 00:28:23.793598 systemd[1]: No hostname configured, using default hostname. Nov 8 00:28:23.793608 systemd[1]: Hostname set to . Nov 8 00:28:23.793622 systemd[1]: Initializing machine ID from random generator. Nov 8 00:28:23.793633 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:28:23.793644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:23.793652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:23.793659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:28:23.793667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:28:23.793673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:28:23.793680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:28:23.793689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:28:23.793701 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:28:23.793713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:23.793724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:23.793731 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:28:23.793737 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:28:23.793744 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:28:23.793753 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:28:23.793759 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:23.793766 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:28:23.793772 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:28:23.793779 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 8 00:28:23.793785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:23.793796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:23.793808 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:23.793816 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:28:23.793822 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:28:23.793829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:28:23.793835 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:28:23.793842 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:28:23.793848 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:28:23.793855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:28:23.793861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:23.793885 systemd-journald[217]: Collecting audit messages is disabled. Nov 8 00:28:23.793908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:23.793915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:23.793921 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:28:23.793930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:28:23.793938 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:28:23.793944 kernel: Bridge firewalling registered Nov 8 00:28:23.793951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:23.793961 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:23.793974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:28:23.793983 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:28:23.793989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:23.793996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:28:23.794005 systemd-journald[217]: Journal started Nov 8 00:28:23.794025 systemd-journald[217]: Runtime Journal (/run/log/journal/a995d73bdc854b3bb4cb8484db23371a) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:28:23.749227 systemd-modules-load[218]: Inserted module 'overlay' Nov 8 00:28:23.768950 systemd-modules-load[218]: Inserted module 'br_netfilter' Nov 8 00:28:23.796284 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:28:23.806449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:23.806894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:23.820392 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:28:23.822515 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:28:23.822793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:23.828474 dracut-cmdline[246]: dracut-dracut-053 Nov 8 00:28:23.829593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:28:23.830484 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:28:23.831262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:28:23.853419 systemd-resolved[263]: Positive Trust Anchors: Nov 8 00:28:23.853428 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:28:23.853449 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:28:23.855114 systemd-resolved[263]: Defaulting to hostname 'linux'. Nov 8 00:28:23.855703 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:28:23.855840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:23.882194 kernel: SCSI subsystem initialized Nov 8 00:28:23.889183 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:28:23.897193 kernel: iscsi: registered transport (tcp) Nov 8 00:28:23.912197 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:28:23.912234 kernel: QLogic iSCSI HBA Driver Nov 8 00:28:23.931761 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:23.935268 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:28:23.950320 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:28:23.950351 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:28:23.951581 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:28:23.982217 kernel: raid6: avx2x4 gen() 52163 MB/s Nov 8 00:28:23.999188 kernel: raid6: avx2x2 gen() 52809 MB/s Nov 8 00:28:24.016432 kernel: raid6: avx2x1 gen() 44767 MB/s Nov 8 00:28:24.016456 kernel: raid6: using algorithm avx2x2 gen() 52809 MB/s Nov 8 00:28:24.034352 kernel: raid6: .... xor() 31588 MB/s, rmw enabled Nov 8 00:28:24.034384 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:28:24.048190 kernel: xor: automatically using best checksumming function avx Nov 8 00:28:24.151190 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:28:24.156970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:24.161366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:24.168531 systemd-udevd[434]: Using default interface naming scheme 'v255'. Nov 8 00:28:24.171008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:24.176496 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 8 00:28:24.182624 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Nov 8 00:28:24.197732 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:28:24.201399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:28:24.273927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:24.280356 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:28:24.291997 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:28:24.292481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:28:24.293043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:24.293271 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:28:24.297264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:28:24.306382 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:28:24.346185 kernel: VMware PVSCSI driver - version 1.0.7.0-k Nov 8 00:28:24.352170 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Nov 8 00:28:24.352229 kernel: vmw_pvscsi: using 64bit dma Nov 8 00:28:24.352241 kernel: vmw_pvscsi: max_id: 16 Nov 8 00:28:24.352249 kernel: vmw_pvscsi: setting ring_pages to 8 Nov 8 00:28:24.355231 kernel: vmw_pvscsi: enabling reqCallThreshold Nov 8 00:28:24.355248 kernel: vmw_pvscsi: driver-based request coalescing enabled Nov 8 00:28:24.355257 kernel: vmw_pvscsi: using MSI-X Nov 8 00:28:24.357180 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Nov 8 00:28:24.359645 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Nov 8 00:28:24.359751 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Nov 8 00:28:24.361195 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Nov 8 00:28:24.363183 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Nov 8 00:28:24.369186 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:28:24.374788 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Nov 8 00:28:24.377480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:24.377558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:24.378049 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:28:24.378160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:24.378245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:24.378351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:24.388269 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:28:24.388307 kernel: AES CTR mode by8 optimization enabled Nov 8 00:28:24.385825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:24.392224 kernel: libata version 3.00 loaded. 
Nov 8 00:28:24.397308 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Nov 8 00:28:24.397423 kernel: ata_piix 0000:00:07.1: version 2.13 Nov 8 00:28:24.397517 kernel: scsi host1: ata_piix Nov 8 00:28:24.399193 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:28:24.399295 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Nov 8 00:28:24.399364 kernel: sd 0:0:0:0: [sda] Cache data unavailable Nov 8 00:28:24.399429 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Nov 8 00:28:24.399495 kernel: scsi host2: ata_piix Nov 8 00:28:24.399560 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Nov 8 00:28:24.399569 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Nov 8 00:28:24.409456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:24.413293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:28:24.421707 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:24.438628 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:24.438661 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:28:24.566194 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Nov 8 00:28:24.571193 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Nov 8 00:28:24.595738 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Nov 8 00:28:24.595865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:28:24.602240 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (491) Nov 8 00:28:24.608195 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (487) Nov 8 00:28:24.608226 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:28:24.608756 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Nov 8 00:28:24.612066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Nov 8 00:28:24.614817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:28:24.617067 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Nov 8 00:28:24.617344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Nov 8 00:28:24.621260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:28:24.647212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:24.652329 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:25.703450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:28:25.703495 disk-uuid[590]: The operation has completed successfully. Nov 8 00:28:25.818872 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:28:25.819142 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:28:25.827323 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:28:25.829247 sh[607]: Success Nov 8 00:28:25.838189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:28:25.883041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:28:25.888042 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Nov 8 00:28:25.888400 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:28:25.903340 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:28:25.903371 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:25.903379 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:28:25.904429 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:28:25.906183 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:28:25.912185 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:28:25.913318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:28:25.918266 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Nov 8 00:28:25.920254 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:28:25.940930 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:25.940967 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:25.940975 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:25.955190 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:25.962258 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:28:25.963282 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:25.967119 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:28:25.972306 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:28:25.982368 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:28:25.991299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:28:26.044665 ignition[667]: Ignition 2.19.0 Nov 8 00:28:26.044672 ignition[667]: Stage: fetch-offline Nov 8 00:28:26.044702 ignition[667]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.044709 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.044771 ignition[667]: parsed url from cmdline: "" Nov 8 00:28:26.044774 ignition[667]: no config URL provided Nov 8 00:28:26.044776 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:28:26.044781 ignition[667]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:28:26.045203 ignition[667]: config successfully fetched Nov 8 00:28:26.045221 ignition[667]: parsing config with SHA512: f99f94d8a07b24e781616f4e53aef106b429aac977dd81c95b13f54424ef851634f4937812e590a7b066af3b93f7edca4810785e5cb5bd8d076a94cbc3a78fbb Nov 8 00:28:26.047586 unknown[667]: fetched base config from "system" Nov 8 00:28:26.047592 unknown[667]: fetched user config from "vmware" Nov 8 00:28:26.047853 ignition[667]: fetch-offline: fetch-offline passed Nov 8 00:28:26.047895 ignition[667]: Ignition finished successfully Nov 8 00:28:26.048888 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:26.070241 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:26.074262 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 8 00:28:26.086429 systemd-networkd[800]: lo: Link UP Nov 8 00:28:26.086646 systemd-networkd[800]: lo: Gained carrier Nov 8 00:28:26.087480 systemd-networkd[800]: Enumeration completed Nov 8 00:28:26.087649 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:28:26.087812 systemd[1]: Reached target network.target - Network. Nov 8 00:28:26.087816 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Nov 8 00:28:26.087919 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:28:26.091382 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:28:26.091502 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:28:26.091881 systemd-networkd[800]: ens192: Link UP Nov 8 00:28:26.091995 systemd-networkd[800]: ens192: Gained carrier Nov 8 00:28:26.096322 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:28:26.104678 ignition[802]: Ignition 2.19.0 Nov 8 00:28:26.104685 ignition[802]: Stage: kargs Nov 8 00:28:26.104788 ignition[802]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.104795 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.105356 ignition[802]: kargs: kargs passed Nov 8 00:28:26.105389 ignition[802]: Ignition finished successfully Nov 8 00:28:26.106694 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:28:26.114283 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:28:26.121752 ignition[810]: Ignition 2.19.0 Nov 8 00:28:26.121758 ignition[810]: Stage: disks Nov 8 00:28:26.121864 ignition[810]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.121871 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.122434 ignition[810]: disks: disks passed Nov 8 00:28:26.122462 ignition[810]: Ignition finished successfully Nov 8 00:28:26.123440 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:28:26.123730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:26.123962 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:28:26.124181 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:28:26.124269 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:28:26.124355 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:28:26.127264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:28:26.137400 systemd-fsck[818]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:28:26.138585 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:28:26.144280 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:28:26.206183 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:28:26.206563 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:28:26.207024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:28:26.212226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:28:26.214268 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Nov 8 00:28:26.214583 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:28:26.214607 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:28:26.214620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:26.217244 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:28:26.217837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:28:26.221184 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (826) Nov 8 00:28:26.223986 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:26.224008 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:26.224022 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:26.228185 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:26.229001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:28:26.246161 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:28:26.248668 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:28:26.250791 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:28:26.252827 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:28:26.304105 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:26.309276 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:28:26.311736 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:28:26.314193 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:26.328108 ignition[939]: INFO : Ignition 2.19.0 Nov 8 00:28:26.328108 ignition[939]: INFO : Stage: mount Nov 8 00:28:26.328548 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:26.328548 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:26.328826 ignition[939]: INFO : mount: mount passed Nov 8 00:28:26.328950 ignition[939]: INFO : Ignition finished successfully Nov 8 00:28:26.329397 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:28:26.329579 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:28:26.333303 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:28:26.902069 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:28:26.908316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:28:26.970362 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (951) Nov 8 00:28:26.973203 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:26.973225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:26.973236 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:26.979193 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:26.980555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:28:27.003682 ignition[967]: INFO : Ignition 2.19.0 Nov 8 00:28:27.003682 ignition[967]: INFO : Stage: files Nov 8 00:28:27.004062 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:27.004062 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:27.004486 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:28:27.005266 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:28:27.005409 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:28:27.007794 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:28:27.007998 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:28:27.008168 unknown[967]: wrote ssh authorized keys file for user: core Nov 8 00:28:27.008383 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:28:27.010791 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:28:27.010980 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:28:27.010980 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:28:27.010980 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:28:27.048879 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:28:27.112655 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:28:27.112942 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:28:27.113735 ignition[967]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.113735 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:28:27.385298 systemd-networkd[800]: ens192: Gained IPv6LL Nov 8 00:28:27.585686 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:28:27.793505 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:28:27.793505 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:28:27.793991 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 8 00:28:27.793991 ignition[967]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:28:27.795729 ignition[967]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:28:27.795729 ignition[967]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 8 00:28:27.795729 ignition[967]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:28:27.829798 ignition[967]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:28:27.833249 ignition[967]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) 
for "coreos-metadata.service" Nov 8 00:28:27.833469 ignition[967]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:28:27.833469 ignition[967]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:28:27.833469 ignition[967]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:28:27.834451 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:28:27.834451 ignition[967]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:28:27.834451 ignition[967]: INFO : files: files passed Nov 8 00:28:27.834451 ignition[967]: INFO : Ignition finished successfully Nov 8 00:28:27.834514 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:28:27.839267 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:28:27.841267 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:28:27.842526 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:28:27.842592 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:28:27.849077 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:27.849077 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:27.850263 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:27.851359 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:27.851591 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:28:27.855269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:28:27.871847 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:28:27.871904 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:28:27.872130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:28:27.872253 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:28:27.872487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:28:27.874295 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:28:27.883244 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:27.888268 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:28:27.895562 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:27.895865 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:27.896070 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:28:27.896261 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:28:27.896355 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:27.896903 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:28:27.897108 systemd[1]: Stopped target basic.target - Basic System. 
Nov 8 00:28:27.897306 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:28:27.897488 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:27.897723 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:27.897940 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:28:27.898138 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:28:27.898366 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:28:27.898561 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:28:27.898735 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:28:27.898862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:28:27.898961 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:28:27.899323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:27.899501 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:27.899656 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:28:27.899712 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:27.899881 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:28:27.899961 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:28:27.900387 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:28:27.900491 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:27.900706 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:28:27.900858 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:28:27.904198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:27.904436 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:28:27.904603 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:28:27.904752 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:28:27.904804 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:27.905076 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:28:27.905142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:28:27.905372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:28:27.905457 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:27.905689 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:28:27.905766 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:28:27.913369 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:28:27.916323 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:28:27.916437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:28:27.916532 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:27.916725 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:28:27.916784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:28:27.918711 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 8 00:28:27.918776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:28:27.923865 ignition[1023]: INFO : Ignition 2.19.0 Nov 8 00:28:27.923865 ignition[1023]: INFO : Stage: umount Nov 8 00:28:27.923865 ignition[1023]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:27.923865 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Nov 8 00:28:27.925652 ignition[1023]: INFO : umount: umount passed Nov 8 00:28:27.925792 ignition[1023]: INFO : Ignition finished successfully Nov 8 00:28:27.926851 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:28:27.926967 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:28:27.927249 systemd[1]: Stopped target network.target - Network. Nov 8 00:28:27.927354 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:28:27.927385 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:28:27.927545 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:28:27.927568 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:28:27.927814 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:28:27.927844 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:28:27.928002 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:28:27.928035 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:27.928314 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:28:27.928618 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:28:27.931613 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:28:27.931679 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:28:27.931977 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:28:27.932004 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:27.936219 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:28:27.936312 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:28:27.936339 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:27.936455 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Nov 8 00:28:27.936477 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Nov 8 00:28:27.936626 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:27.936862 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:28:27.936915 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:28:27.940649 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:28:27.940694 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:27.940833 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:28:27.940854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:27.940976 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:28:27.940997 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:27.943953 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 8 00:28:27.944012 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:28:27.947379 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:28:27.947452 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:27.948086 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:28:27.948117 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:27.948258 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:28:27.948276 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:27.948389 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:28:27.948411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:27.948565 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:28:27.948587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:27.948725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:27.948747 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:27.956282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:28:27.956389 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:28:27.956420 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:27.956542 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:28:27.956565 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:27.956677 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:28:27.956698 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:27.956805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:27.956826 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:27.957878 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:28:27.960722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:28:27.960784 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:28:28.061387 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:28:28.061475 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:28:28.062000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:28:28.062160 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:28:28.062224 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:28.065301 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:28:28.093093 systemd[1]: Switching root. Nov 8 00:28:28.122105 systemd-journald[217]: Journal stopped Nov 8 00:28:29.396349 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
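"Switching root" followed by "Journal stopped" and "Received SIGTERM from PID 1" marks the hand-off from the initramfs to the real root filesystem: journald exits in the initrd and is restarted under the new root (the systemd-journald[1198] lines further down). A small sketch that pulls those marker messages back out of the journal with `journalctl -o json`, assuming it is run later on the same machine for the same boot:

    import json
    import subprocess

    # Dump the current boot's journal as JSON and pick out the root-switch markers.
    out = subprocess.run(
        ["journalctl", "-b", "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        entry = json.loads(line)
        msg = entry.get("MESSAGE", "")
        if "Switching root" in msg or "Journal stopped" in msg:
            print(entry.get("SYSLOG_IDENTIFIER", "?"), msg)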
Nov 8 00:28:29.396375 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:28:29.396384 kernel: SELinux: policy capability open_perms=1 Nov 8 00:28:29.396390 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:28:29.396395 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:28:29.396400 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:28:29.396408 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:28:29.396414 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:28:29.396420 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:28:29.396425 kernel: audit: type=1403 audit(1762561708.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:28:29.396432 systemd[1]: Successfully loaded SELinux policy in 31ms. Nov 8 00:28:29.396439 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.005ms. Nov 8 00:28:29.396446 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:28:29.396454 systemd[1]: Detected virtualization vmware. Nov 8 00:28:29.396461 systemd[1]: Detected architecture x86-64. Nov 8 00:28:29.396467 systemd[1]: Detected first boot. Nov 8 00:28:29.396474 systemd[1]: Initializing machine ID from random generator. Nov 8 00:28:29.396483 zram_generator::config[1085]: No configuration found. Nov 8 00:28:29.396490 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:28:29.396498 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:28:29.396505 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Nov 8 00:28:29.396512 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:28:29.396518 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:28:29.396525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:28:29.396534 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:28:29.396541 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:28:29.396548 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:28:29.396554 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:28:29.396561 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:28:29.396568 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:28:29.396575 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:28:29.396583 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:29.396590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:29.396597 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:28:29.396603 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
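The two "Ignoring unknown escape sequences" warnings above refer to line 11 of coreos-metadata.service, which shells out to `ip addr show ens192` plus two greps to derive COREOS_CUSTOM_PRIVATE_IPV4 and COREOS_CUSTOM_PUBLIC_IPV4; systemd only objects to backslash sequences in the unit file, the command itself still runs. A rough Python equivalent of that extraction, assuming the interface is ens192 and that "private" simply means a 10.x address, exactly as the grep does:

    import re
    import subprocess

    # Equivalent of: ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+"
    out = subprocess.run(["ip", "addr", "show", "ens192"],
                         capture_output=True, text=True, check=True).stdout

    addrs = re.findall(r"inet ([\d.]+)", out)              # all IPv4 addresses on ens192
    private = [a for a in addrs if a.startswith("10.")]    # the grep "inet 10." branch
    public = [a for a in addrs if not a.startswith("10.")] # the grep -v "inet 10." branch

    print("COREOS_CUSTOM_PRIVATE_IPV4=" + (private[0] if private else ""))
    print("COREOS_CUSTOM_PUBLIC_IPV4=" + (public[0] if public else ""))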
Nov 8 00:28:29.396610 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:28:29.396617 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:28:29.396624 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:28:29.396630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:29.396639 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:28:29.396646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:29.396655 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:28:29.396662 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:28:29.396669 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:28:29.396676 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:28:29.396683 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:28:29.396690 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:28:29.396698 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:28:29.396706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:29.396713 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:29.396721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:29.396728 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:28:29.396736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:28:29.396744 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:28:29.396751 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:28:29.396759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:29.396766 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:28:29.396773 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:28:29.396780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:28:29.396787 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:28:29.396796 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Nov 8 00:28:29.396803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:28:29.396810 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:28:29.396817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:29.396828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:28:29.396842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:29.396857 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:28:29.396865 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:28:29.396872 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Nov 8 00:28:29.396881 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 8 00:28:29.396889 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 8 00:28:29.396896 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:28:29.396903 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:28:29.396910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:28:29.396917 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:28:29.396925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:28:29.396932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:29.396940 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:28:29.396948 kernel: fuse: init (API version 7.39) Nov 8 00:28:29.396954 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:28:29.396961 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:28:29.396968 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:28:29.396975 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:28:29.396982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:28:29.396990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:28:29.396998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:29.397006 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:28:29.397013 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:28:29.397020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:29.397027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:29.397034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:28:29.397041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:28:29.397048 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:28:29.397055 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:28:29.397063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:29.397070 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:28:29.397078 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:28:29.397085 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:28:29.397091 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:28:29.397098 kernel: loop: module loaded Nov 8 00:28:29.397105 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:28:29.397112 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:28:29.397132 systemd-journald[1198]: Collecting audit messages is disabled. 
Nov 8 00:28:29.397148 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:28:29.397156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:28:29.397163 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:28:29.397179 systemd-journald[1198]: Journal started Nov 8 00:28:29.397197 systemd-journald[1198]: Runtime Journal (/run/log/journal/84c463a85bc64bf289ae73f8d7bc4de3) is 4.8M, max 38.6M, 33.8M free. Nov 8 00:28:29.397571 jq[1162]: true Nov 8 00:28:29.398103 jq[1215]: true Nov 8 00:28:29.402262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:28:29.406471 kernel: ACPI: bus type drm_connector registered Nov 8 00:28:29.406495 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:28:29.408185 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:28:29.411247 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:28:29.417346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:28:29.418474 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:28:29.418589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:28:29.419385 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:28:29.420299 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:28:29.426020 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:28:29.436739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:29.442699 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:28:29.450298 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:28:29.450481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:28:29.464582 systemd-journald[1198]: Time spent on flushing to /var/log/journal/84c463a85bc64bf289ae73f8d7bc4de3 is 50.393ms for 1824 entries. Nov 8 00:28:29.464582 systemd-journald[1198]: System Journal (/var/log/journal/84c463a85bc64bf289ae73f8d7bc4de3) is 8.0M, max 584.8M, 576.8M free. Nov 8 00:28:29.525498 systemd-journald[1198]: Received client request to flush runtime journal. Nov 8 00:28:29.495042 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Nov 8 00:28:29.522073 ignition[1229]: Ignition 2.19.0 Nov 8 00:28:29.495052 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Nov 8 00:28:29.522300 ignition[1229]: deleting config from guestinfo properties Nov 8 00:28:29.500584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:29.509420 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:28:29.526999 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:28:29.530839 ignition[1229]: Successfully deleted config Nov 8 00:28:29.537563 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Nov 8 00:28:29.539101 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:28:29.547573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
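The journald self-report above gives a concrete flushing cost: 50.393 ms spent persisting 1824 entries from the 4.8 MiB runtime journal into the 8.0 MiB system journal. A one-liner to turn that into a per-entry figure, with both values copied from the log:

    # Values reported by systemd-journald[1198] above.
    flush_ms, entries = 50.393, 1824
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~27.6 us per entry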
Nov 8 00:28:29.572380 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Nov 8 00:28:29.572393 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Nov 8 00:28:29.579908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:29.590430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:29.596284 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:28:29.601925 udevadm[1279]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:28:29.888424 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:28:29.905291 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:29.922945 systemd-udevd[1282]: Using default interface naming scheme 'v255'. Nov 8 00:28:29.971100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:29.978411 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:28:29.994289 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:28:30.012505 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 8 00:28:30.020706 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:28:30.075833 systemd-networkd[1287]: lo: Link UP Nov 8 00:28:30.077208 systemd-networkd[1287]: lo: Gained carrier Nov 8 00:28:30.078132 systemd-networkd[1287]: Enumeration completed Nov 8 00:28:30.078243 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:28:30.079289 systemd-networkd[1287]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Nov 8 00:28:30.083204 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:28:30.083237 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Nov 8 00:28:30.083356 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Nov 8 00:28:30.083828 systemd-networkd[1287]: ens192: Link UP Nov 8 00:28:30.084205 systemd-networkd[1287]: ens192: Gained carrier Nov 8 00:28:30.086185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1296) Nov 8 00:28:30.087307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:28:30.104201 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:28:30.132060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Nov 8 00:28:30.155184 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Nov 8 00:28:30.170087 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Nov 8 00:28:30.170250 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:28:30.183188 kernel: Guest personality initialized and is active Nov 8 00:28:30.186235 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 8 00:28:30.186266 kernel: Initialized host personality Nov 8 00:28:30.191159 (udev-worker)[1289]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Nov 8 00:28:30.200186 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:28:30.207419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
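Here systemd-networkd enumerates lo and ens192, applies the 00-vmware.network file written by Ignition, and the vmxnet3 driver reports the link up at 10000 Mbps before ens192 gains carrier. A quick way to confirm the same link state from userspace, assuming the NIC is still named ens192, is to read the sysfs attributes the kernel exposes:

    from pathlib import Path

    iface = Path("/sys/class/net/ens192")   # interface name assumed from the log

    for attr in ("operstate", "carrier", "speed"):
        node = iface / attr
        try:
            print(attr, "=", node.read_text().strip())
        except OSError as err:              # e.g. "speed" raises EINVAL while the link is down
            print(attr, "unreadable:", err)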
Nov 8 00:28:30.218900 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:28:30.228289 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:28:30.255800 lvm[1324]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:28:30.276103 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:28:30.276357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:30.282297 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:28:30.286204 lvm[1328]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:28:30.295820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:30.317902 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:28:30.318464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:28:30.318614 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:28:30.318628 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:28:30.318739 systemd[1]: Reached target machines.target - Containers. Nov 8 00:28:30.319598 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:28:30.325405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:28:30.328308 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:28:30.328534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:30.329252 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:28:30.331423 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:28:30.335686 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:28:30.337066 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:28:30.344687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:28:30.353312 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:28:30.371085 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:28:30.373660 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 8 00:28:30.401220 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:28:30.418188 kernel: loop1: detected capacity change from 0 to 2976 Nov 8 00:28:30.447216 kernel: loop2: detected capacity change from 0 to 142488 Nov 8 00:28:30.477413 kernel: loop3: detected capacity change from 0 to 224512 Nov 8 00:28:30.556194 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:28:30.588194 kernel: loop5: detected capacity change from 0 to 2976 Nov 8 00:28:30.602194 kernel: loop6: detected capacity change from 0 to 142488 Nov 8 00:28:30.616194 kernel: loop7: detected capacity change from 0 to 224512 Nov 8 00:28:30.634964 (sd-merge)[1352]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Nov 8 00:28:30.635288 (sd-merge)[1352]: Merged extensions into '/usr'. Nov 8 00:28:30.639219 systemd[1]: Reloading requested from client PID 1339 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:28:30.639229 systemd[1]: Reloading... Nov 8 00:28:30.672211 zram_generator::config[1379]: No configuration found. Nov 8 00:28:30.764953 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:28:30.782392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:30.820607 systemd[1]: Reloading finished in 181 ms. Nov 8 00:28:30.831162 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:28:30.836378 systemd[1]: Starting ensure-sysext.service... Nov 8 00:28:30.837266 ldconfig[1335]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:28:30.840380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:28:30.840816 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:28:30.843858 systemd[1]: Reloading requested from client PID 1441 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:28:30.843924 systemd[1]: Reloading... Nov 8 00:28:30.854609 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:28:30.854830 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:28:30.855359 systemd-tmpfiles[1443]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:28:30.855534 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Nov 8 00:28:30.855583 systemd-tmpfiles[1443]: ACLs are not supported, ignoring. Nov 8 00:28:30.857311 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:28:30.857327 systemd-tmpfiles[1443]: Skipping /boot Nov 8 00:28:30.862843 systemd-tmpfiles[1443]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:28:30.862855 systemd-tmpfiles[1443]: Skipping /boot Nov 8 00:28:30.891191 zram_generator::config[1474]: No configuration found. Nov 8 00:28:30.962049 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Nov 8 00:28:30.977821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:31.018018 systemd[1]: Reloading finished in 173 ms. Nov 8 00:28:31.038573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:31.044701 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:31.048646 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:28:31.050452 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:28:31.054747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:28:31.056377 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:28:31.060324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:31.067380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:31.070081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:31.071708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:28:31.073772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:31.073942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:31.074884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:31.075062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:31.075543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:28:31.076508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:28:31.090531 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:28:31.090642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:28:31.091299 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:31.094152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:31.095969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:31.096567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:31.096840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:31.105949 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:28:31.107525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:31.109449 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:28:31.111351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:28:31.112812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:31.113281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:31.113696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:31.113799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:31.116055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:28:31.117369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:28:31.119007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:28:31.121839 systemd[1]: Finished ensure-sysext.service. Nov 8 00:28:31.126759 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:28:31.131306 augenrules[1581]: No rules Nov 8 00:28:31.131960 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:31.134263 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:28:31.135591 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:28:31.135840 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:28:31.136385 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:28:31.139543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:28:31.141628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:28:31.151285 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:28:31.166055 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:28:31.173400 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:28:31.173921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:28:31.179060 systemd-resolved[1542]: Positive Trust Anchors: Nov 8 00:28:31.179071 systemd-resolved[1542]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:28:31.179096 systemd-resolved[1542]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:28:31.181888 systemd-resolved[1542]: Defaulting to hostname 'linux'. Nov 8 00:28:31.183446 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:28:31.183630 systemd[1]: Reached target network.target - Network. Nov 8 00:28:31.183732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 8 00:28:31.190969 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:28:31.191215 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:28:31.191386 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:28:31.191529 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:28:31.191665 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:28:31.191793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:28:31.191813 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:28:31.191907 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:28:31.192134 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:28:31.192296 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:28:31.192420 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:28:31.193582 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:28:31.194835 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:28:31.195737 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:28:31.206846 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:28:31.207007 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:28:31.207113 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:28:31.207321 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:28:31.207345 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:28:31.207359 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:28:31.209363 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:28:31.211300 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:28:31.214085 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:28:31.217362 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:28:31.218426 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:28:31.221368 jq[1606]: false Nov 8 00:28:31.225348 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:28:31.227165 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:28:31.231164 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:28:31.234304 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:30:09.688161 systemd-timesyncd[1580]: Contacted time server 67.217.246.204:123 (0.flatcar.pool.ntp.org). Nov 8 00:30:09.688169 systemd-resolved[1542]: Clock change detected. Flushing caches. Nov 8 00:30:09.689945 systemd-timesyncd[1580]: Initial clock synchronization to Sat 2025-11-08 00:30:09.688095 UTC. Nov 8 00:30:09.693404 dbus-daemon[1604]: [system] SELinux support is enabled Nov 8 00:30:09.697066 systemd[1]: Starting systemd-logind.service - User Login Management... 
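Note the timestamp jump inside this stretch of the log: entries stamped 00:28:31 are followed immediately by entries stamped 00:30:09, because systemd-timesyncd reached 0.flatcar.pool.ntp.org (67.217.246.204) and stepped the clock, and systemd-resolved flushed its caches in response; everything logged afterwards is on the corrected clock. The size of the step, computed from the two timestamps printed in the log:

    from datetime import datetime

    # Last entry stamped before the step vs. the synchronization target, both as printed above.
    before = datetime.fromisoformat("2025-11-08 00:28:31.234304")
    after = datetime.fromisoformat("2025-11-08 00:30:09.688095")

    print("clock stepped forward by", (after - before).total_seconds(), "seconds")  # ~98.45 s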
Nov 8 00:30:09.697411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:30:09.698517 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:30:09.700982 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:30:09.708435 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Nov 8 00:30:09.710197 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:30:09.716578 update_engine[1622]: I20251108 00:30:09.716503 1622 main.cc:92] Flatcar Update Engine starting Nov 8 00:30:09.717327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:30:09.717478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:30:09.717637 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:30:09.717702 update_engine[1622]: I20251108 00:30:09.717684 1622 update_check_scheduler.cc:74] Next update check in 10m41s Nov 8 00:30:09.717763 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:30:09.721924 extend-filesystems[1607]: Found loop4 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found loop5 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found loop6 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found loop7 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda1 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda2 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda3 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found usr Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda4 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda6 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda7 Nov 8 00:30:09.721924 extend-filesystems[1607]: Found sda9 Nov 8 00:30:09.721924 extend-filesystems[1607]: Checking size of /dev/sda9 Nov 8 00:30:09.723732 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:30:09.730411 jq[1623]: true Nov 8 00:30:09.725060 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:30:09.743578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:30:09.743615 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:30:09.744229 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:30:09.744244 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:30:09.747382 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:30:09.748135 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:30:09.749007 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:30:09.751557 jq[1633]: true Nov 8 00:30:09.756647 extend-filesystems[1607]: Old size kept for /dev/sda9 Nov 8 00:30:09.756647 extend-filesystems[1607]: Found sr0 Nov 8 00:30:09.761307 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Nov 8 00:30:09.764594 systemd-logind[1620]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:30:09.765104 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:30:09.765527 (ntainerd)[1649]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:30:09.766727 systemd-logind[1620]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:30:09.767062 systemd-logind[1620]: New seat seat0. Nov 8 00:30:09.774533 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Nov 8 00:30:09.774760 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:30:09.787493 tar[1629]: linux-amd64/LICENSE Nov 8 00:30:09.787684 tar[1629]: linux-amd64/helm Nov 8 00:30:09.788972 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Nov 8 00:30:09.815181 unknown[1654]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Nov 8 00:30:09.822872 unknown[1654]: Core dump limit set to -1 Nov 8 00:30:09.828068 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Nov 8 00:30:09.857114 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1292) Nov 8 00:30:09.866934 kernel: NET: Registered PF_VSOCK protocol family Nov 8 00:30:09.883315 locksmithd[1643]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:30:09.936566 bash[1677]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:30:09.937744 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:30:09.938408 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:30:09.978955 sshd_keygen[1635]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:30:09.998131 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:30:10.007117 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:30:10.020026 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:30:10.020184 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:30:10.032757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:30:10.053571 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:30:10.064251 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:30:10.075144 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:30:10.075372 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:30:10.129457 containerd[1649]: time="2025-11-08T00:30:10.129416248Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:30:10.146171 containerd[1649]: time="2025-11-08T00:30:10.146148664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.147317 containerd[1649]: time="2025-11-08T00:30:10.147300160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:30:10.147364 containerd[1649]: time="2025-11-08T00:30:10.147356868Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:30:10.147400 containerd[1649]: time="2025-11-08T00:30:10.147392748Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:30:10.147519 containerd[1649]: time="2025-11-08T00:30:10.147510171Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:30:10.147567 containerd[1649]: time="2025-11-08T00:30:10.147559498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.147719 containerd[1649]: time="2025-11-08T00:30:10.147699705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:30:10.147790 containerd[1649]: time="2025-11-08T00:30:10.147775630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148043 containerd[1649]: time="2025-11-08T00:30:10.148026661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148099 containerd[1649]: time="2025-11-08T00:30:10.148090080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148146 containerd[1649]: time="2025-11-08T00:30:10.148133493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148185 containerd[1649]: time="2025-11-08T00:30:10.148177817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148614 containerd[1649]: time="2025-11-08T00:30:10.148261824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148614 containerd[1649]: time="2025-11-08T00:30:10.148410211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148614 containerd[1649]: time="2025-11-08T00:30:10.148496266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:30:10.148614 containerd[1649]: time="2025-11-08T00:30:10.148510115Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:30:10.148614 containerd[1649]: time="2025-11-08T00:30:10.148559705Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 8 00:30:10.148614 containerd[1649]: time="2025-11-08T00:30:10.148591469Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:30:10.160434 containerd[1649]: time="2025-11-08T00:30:10.160415558Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:30:10.160514 containerd[1649]: time="2025-11-08T00:30:10.160504645Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:30:10.160576 containerd[1649]: time="2025-11-08T00:30:10.160568817Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:30:10.160642 containerd[1649]: time="2025-11-08T00:30:10.160633235Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:30:10.160678 containerd[1649]: time="2025-11-08T00:30:10.160671715Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:30:10.160786 containerd[1649]: time="2025-11-08T00:30:10.160777649Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:30:10.161018 containerd[1649]: time="2025-11-08T00:30:10.161008671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:30:10.161110 containerd[1649]: time="2025-11-08T00:30:10.161101400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161148440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161159982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161173420Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161181814Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161190696Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161198826Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161206786Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161214101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161221295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161228087Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161239632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161247168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161254162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162327 containerd[1649]: time="2025-11-08T00:30:10.161261714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161268554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161275702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161282439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161290173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161300239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161311188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161322942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161334598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161345366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161358798Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161381058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161393054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161402830Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:30:10.162523 containerd[1649]: time="2025-11-08T00:30:10.161437449Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161453662Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161464254Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161475314Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161484891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161496465Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161505635Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:30:10.162736 containerd[1649]: time="2025-11-08T00:30:10.161514992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:30:10.162843 containerd[1649]: time="2025-11-08T00:30:10.161784530Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:30:10.162843 containerd[1649]: time="2025-11-08T00:30:10.161822446Z" level=info msg="Connect containerd service" Nov 8 00:30:10.162843 containerd[1649]: time="2025-11-08T00:30:10.161844903Z" level=info msg="using legacy CRI server" Nov 8 00:30:10.162843 containerd[1649]: time="2025-11-08T00:30:10.161849968Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:30:10.162843 containerd[1649]: time="2025-11-08T00:30:10.161925436Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:30:10.162843 containerd[1649]: time="2025-11-08T00:30:10.162222213Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:30:10.163169 containerd[1649]: time="2025-11-08T00:30:10.163157172Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:30:10.163241 containerd[1649]: time="2025-11-08T00:30:10.163232446Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:30:10.163376 containerd[1649]: time="2025-11-08T00:30:10.163359823Z" level=info msg="Start subscribing containerd event" Nov 8 00:30:10.163422 containerd[1649]: time="2025-11-08T00:30:10.163414156Z" level=info msg="Start recovering state" Nov 8 00:30:10.163480 containerd[1649]: time="2025-11-08T00:30:10.163473183Z" level=info msg="Start event monitor" Nov 8 00:30:10.163528 containerd[1649]: time="2025-11-08T00:30:10.163520337Z" level=info msg="Start snapshots syncer" Nov 8 00:30:10.163558 containerd[1649]: time="2025-11-08T00:30:10.163552922Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:30:10.163596 containerd[1649]: time="2025-11-08T00:30:10.163588108Z" level=info msg="Start streaming server" Nov 8 00:30:10.163665 containerd[1649]: time="2025-11-08T00:30:10.163655401Z" level=info msg="containerd successfully booted in 0.034848s" Nov 8 00:30:10.163723 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:30:10.294271 tar[1629]: linux-amd64/README.md Nov 8 00:30:10.308563 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:30:10.569057 systemd-networkd[1287]: ens192: Gained IPv6LL Nov 8 00:30:10.571144 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:30:10.572556 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:30:10.578426 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Nov 8 00:30:10.581893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:10.584081 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:30:10.628024 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:30:10.628187 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Nov 8 00:30:10.629908 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:30:10.630765 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
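
The "failed to load cni during init" error above is containerd's CRI plugin noting that /etc/cni/net.d is empty at this point in boot, so pod networking cannot be configured yet; a CNI plugin (normally installed later by the cluster's network add-on) is expected to drop a config file there. A minimal sketch of such a file, saved for example as /etc/cni/net.d/10-example.conflist (plugin choice, network name and subnet are illustrative assumptions, not taken from this log):

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

The "Start cni network conf syncer for default" line above is the watcher that should pick such a file up once it appears, without restarting containerd.
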
Nov 8 00:30:11.507850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:11.508321 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:30:11.509035 systemd[1]: Startup finished in 6.195s (kernel) + 4.276s (userspace) = 10.472s. Nov 8 00:30:11.512310 (kubelet)[1808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:11.540965 login[1709]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:30:11.543069 login[1710]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:30:11.549507 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:30:11.555051 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:30:11.557945 systemd-logind[1620]: New session 1 of user core. Nov 8 00:30:11.561382 systemd-logind[1620]: New session 2 of user core. Nov 8 00:30:11.565008 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:30:11.572082 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:30:11.573800 (systemd)[1817]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:30:11.636018 systemd[1817]: Queued start job for default target default.target. Nov 8 00:30:11.636251 systemd[1817]: Created slice app.slice - User Application Slice. Nov 8 00:30:11.636263 systemd[1817]: Reached target paths.target - Paths. Nov 8 00:30:11.636271 systemd[1817]: Reached target timers.target - Timers. Nov 8 00:30:11.644960 systemd[1817]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:30:11.649369 systemd[1817]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:30:11.649412 systemd[1817]: Reached target sockets.target - Sockets. Nov 8 00:30:11.649421 systemd[1817]: Reached target basic.target - Basic System. Nov 8 00:30:11.649489 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:30:11.649839 systemd[1817]: Reached target default.target - Main User Target. Nov 8 00:30:11.649864 systemd[1817]: Startup finished in 72ms. Nov 8 00:30:11.652051 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:30:11.652547 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:30:12.124957 kubelet[1808]: E1108 00:30:12.124903 1808 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:12.126239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:12.126352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:22.304688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:30:22.310034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:22.646433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
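
The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is only written by "kubeadm init" or "kubeadm join", so systemd keeps rescheduling the unit (the restart counter climbs to 2, 3 and 4 below) until it appears. For orientation, a minimal sketch of the kind of KubeletConfiguration kubeadm writes there; the values are illustrative assumptions, except that the cgroupfs driver and the client CA path match what the kubelet reports later in this log:

    # /var/lib/kubelet/config.yaml (sketch only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    cgroupDriver: cgroupfs
    staticPodPath: /etc/kubernetes/manifests
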
Nov 8 00:30:22.649277 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:22.691513 kubelet[1870]: E1108 00:30:22.691456 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:22.693877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:22.694013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:32.804707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:30:32.810050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:33.257647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:33.261676 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:33.308417 kubelet[1889]: E1108 00:30:33.308375 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:33.309782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:33.309881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:39.958893 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:30:39.964134 systemd[1]: Started sshd@0-139.178.70.109:22-147.75.109.163:41040.service - OpenSSH per-connection server daemon (147.75.109.163:41040). Nov 8 00:30:39.999174 sshd[1898]: Accepted publickey for core from 147.75.109.163 port 41040 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.000169 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.003344 systemd-logind[1620]: New session 3 of user core. Nov 8 00:30:40.014249 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:30:40.069816 systemd[1]: Started sshd@1-139.178.70.109:22-147.75.109.163:50878.service - OpenSSH per-connection server daemon (147.75.109.163:50878). Nov 8 00:30:40.103686 sshd[1903]: Accepted publickey for core from 147.75.109.163 port 50878 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.105040 sshd[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.109209 systemd-logind[1620]: New session 4 of user core. Nov 8 00:30:40.114181 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:30:40.166644 sshd[1903]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:40.172967 systemd[1]: Started sshd@2-139.178.70.109:22-147.75.109.163:50888.service - OpenSSH per-connection server daemon (147.75.109.163:50888). Nov 8 00:30:40.173390 systemd[1]: sshd@1-139.178.70.109:22-147.75.109.163:50878.service: Deactivated successfully. Nov 8 00:30:40.176383 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:30:40.177955 systemd-logind[1620]: Session 4 logged out. Waiting for processes to exit. 
Nov 8 00:30:40.178511 systemd-logind[1620]: Removed session 4. Nov 8 00:30:40.203409 sshd[1908]: Accepted publickey for core from 147.75.109.163 port 50888 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.204631 sshd[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.208048 systemd-logind[1620]: New session 5 of user core. Nov 8 00:30:40.216237 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:30:40.264417 sshd[1908]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:40.266476 systemd[1]: sshd@2-139.178.70.109:22-147.75.109.163:50888.service: Deactivated successfully. Nov 8 00:30:40.268973 systemd-logind[1620]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:30:40.269541 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:30:40.281335 systemd[1]: Started sshd@3-139.178.70.109:22-147.75.109.163:50892.service - OpenSSH per-connection server daemon (147.75.109.163:50892). Nov 8 00:30:40.282081 systemd-logind[1620]: Removed session 5. Nov 8 00:30:40.309971 sshd[1919]: Accepted publickey for core from 147.75.109.163 port 50892 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.310934 sshd[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.314981 systemd-logind[1620]: New session 6 of user core. Nov 8 00:30:40.321210 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:30:40.374692 sshd[1919]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:40.381268 systemd[1]: Started sshd@4-139.178.70.109:22-147.75.109.163:50906.service - OpenSSH per-connection server daemon (147.75.109.163:50906). Nov 8 00:30:40.382050 systemd[1]: sshd@3-139.178.70.109:22-147.75.109.163:50892.service: Deactivated successfully. Nov 8 00:30:40.386374 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:30:40.386760 systemd-logind[1620]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:30:40.388245 systemd-logind[1620]: Removed session 6. Nov 8 00:30:40.414783 sshd[1924]: Accepted publickey for core from 147.75.109.163 port 50906 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.415684 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.420715 systemd-logind[1620]: New session 7 of user core. Nov 8 00:30:40.427279 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:30:40.488480 sudo[1931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:30:40.488686 sudo[1931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:40.498831 sudo[1931]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:40.501502 sshd[1924]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:40.508216 systemd[1]: Started sshd@5-139.178.70.109:22-147.75.109.163:50910.service - OpenSSH per-connection server daemon (147.75.109.163:50910). Nov 8 00:30:40.508532 systemd[1]: sshd@4-139.178.70.109:22-147.75.109.163:50906.service: Deactivated successfully. Nov 8 00:30:40.513388 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:30:40.513587 systemd-logind[1620]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:30:40.516328 systemd-logind[1620]: Removed session 7. 
Nov 8 00:30:40.542556 sshd[1933]: Accepted publickey for core from 147.75.109.163 port 50910 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.543557 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.546661 systemd-logind[1620]: New session 8 of user core. Nov 8 00:30:40.557299 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:30:40.606818 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:30:40.607437 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:40.610498 sudo[1941]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:40.615728 sudo[1940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:30:40.616110 sudo[1940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:40.628172 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:30:40.629171 auditctl[1944]: No rules Nov 8 00:30:40.629396 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:30:40.629538 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:30:40.632142 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:30:40.652576 augenrules[1963]: No rules Nov 8 00:30:40.653372 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:30:40.654609 sudo[1940]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:40.657064 sshd[1933]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:40.665182 systemd[1]: Started sshd@6-139.178.70.109:22-147.75.109.163:50926.service - OpenSSH per-connection server daemon (147.75.109.163:50926). Nov 8 00:30:40.665563 systemd[1]: sshd@5-139.178.70.109:22-147.75.109.163:50910.service: Deactivated successfully. Nov 8 00:30:40.669033 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:30:40.671966 systemd-logind[1620]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:30:40.672680 systemd-logind[1620]: Removed session 8. Nov 8 00:30:40.694024 sshd[1969]: Accepted publickey for core from 147.75.109.163 port 50926 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:30:40.694859 sshd[1969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:40.697634 systemd-logind[1620]: New session 9 of user core. Nov 8 00:30:40.707189 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:30:40.755357 sudo[1976]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:30:40.755537 sudo[1976]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:41.113266 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:30:41.114096 (dockerd)[1992]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:30:41.470713 dockerd[1992]: time="2025-11-08T00:30:41.470150630Z" level=info msg="Starting up" Nov 8 00:30:41.545086 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2933587006-merged.mount: Deactivated successfully. Nov 8 00:30:41.796658 dockerd[1992]: time="2025-11-08T00:30:41.796559819Z" level=info msg="Loading containers: start." 
Nov 8 00:30:41.910933 kernel: Initializing XFRM netlink socket Nov 8 00:30:41.976063 systemd-networkd[1287]: docker0: Link UP Nov 8 00:30:41.998988 dockerd[1992]: time="2025-11-08T00:30:41.998957709Z" level=info msg="Loading containers: done." Nov 8 00:30:42.016742 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2389695387-merged.mount: Deactivated successfully. Nov 8 00:30:42.018306 dockerd[1992]: time="2025-11-08T00:30:42.018271258Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:30:42.018364 dockerd[1992]: time="2025-11-08T00:30:42.018352288Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:30:42.018443 dockerd[1992]: time="2025-11-08T00:30:42.018427782Z" level=info msg="Daemon has completed initialization" Nov 8 00:30:42.037451 dockerd[1992]: time="2025-11-08T00:30:42.037396796Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:30:42.037967 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:30:43.189123 containerd[1649]: time="2025-11-08T00:30:43.189085561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:30:43.554703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:30:43.560170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:43.819051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:43.821098 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:43.875308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:44.227088 kubelet[2144]: E1108 00:30:43.874311 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:43.875408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:44.341513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724533162.mount: Deactivated successfully. 
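
The dockerd warning above about "Not using native diff for overlay2" is expected on kernels built with CONFIG_OVERLAY_FS_REDIRECT_DIR: dockerd falls back to its generic diff implementation instead of the faster native overlayfs diff, which mainly affects image builds on this host rather than running containers. The storage driver details it settled on can be inspected with the standard docker CLI (output fields vary by Docker version):

    # the Storage Driver section reports whether native overlay diff is in use
    docker info | grep -iA8 'storage driver'
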
Nov 8 00:30:45.434064 containerd[1649]: time="2025-11-08T00:30:45.433999220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:45.436132 containerd[1649]: time="2025-11-08T00:30:45.434697669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:30:45.436132 containerd[1649]: time="2025-11-08T00:30:45.434715523Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:45.437133 containerd[1649]: time="2025-11-08T00:30:45.436951650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:45.437537 containerd[1649]: time="2025-11-08T00:30:45.437518976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.248409358s" Nov 8 00:30:45.437579 containerd[1649]: time="2025-11-08T00:30:45.437539099Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:30:45.438261 containerd[1649]: time="2025-11-08T00:30:45.438212405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:30:47.081562 containerd[1649]: time="2025-11-08T00:30:47.081523369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:47.095438 containerd[1649]: time="2025-11-08T00:30:47.095389614Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:30:47.104705 containerd[1649]: time="2025-11-08T00:30:47.104660488Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:47.118327 containerd[1649]: time="2025-11-08T00:30:47.118278792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:47.119141 containerd[1649]: time="2025-11-08T00:30:47.118936893Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.680600528s" Nov 8 00:30:47.119141 containerd[1649]: time="2025-11-08T00:30:47.118958485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:30:47.119489 containerd[1649]: 
time="2025-11-08T00:30:47.119445083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:30:48.583847 containerd[1649]: time="2025-11-08T00:30:48.583812436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:48.584953 containerd[1649]: time="2025-11-08T00:30:48.584924582Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:30:48.585593 containerd[1649]: time="2025-11-08T00:30:48.585276451Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:48.587725 containerd[1649]: time="2025-11-08T00:30:48.587703753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:48.588426 containerd[1649]: time="2025-11-08T00:30:48.588405997Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.468873015s" Nov 8 00:30:48.588482 containerd[1649]: time="2025-11-08T00:30:48.588425687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:30:48.588792 containerd[1649]: time="2025-11-08T00:30:48.588746526Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:30:50.002237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292036794.mount: Deactivated successfully. 
Nov 8 00:30:50.544958 containerd[1649]: time="2025-11-08T00:30:50.544610525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:50.558014 containerd[1649]: time="2025-11-08T00:30:50.557954215Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:30:50.569025 containerd[1649]: time="2025-11-08T00:30:50.568991304Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:50.578318 containerd[1649]: time="2025-11-08T00:30:50.578299431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:50.578624 containerd[1649]: time="2025-11-08T00:30:50.578608329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.989843496s" Nov 8 00:30:50.578672 containerd[1649]: time="2025-11-08T00:30:50.578663395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:30:50.579004 containerd[1649]: time="2025-11-08T00:30:50.578988835Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:30:51.328840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103501715.mount: Deactivated successfully. 
Nov 8 00:30:53.277100 containerd[1649]: time="2025-11-08T00:30:53.277061658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.277426 containerd[1649]: time="2025-11-08T00:30:53.277407792Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:30:53.278125 containerd[1649]: time="2025-11-08T00:30:53.278106298Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.279833 containerd[1649]: time="2025-11-08T00:30:53.279818645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.280565 containerd[1649]: time="2025-11-08T00:30:53.280550643Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.701543593s" Nov 8 00:30:53.280631 containerd[1649]: time="2025-11-08T00:30:53.280621410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:30:53.280955 containerd[1649]: time="2025-11-08T00:30:53.280938112Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:30:53.850835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025113916.mount: Deactivated successfully. 
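
The images pulled in this stretch (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd just below) are the control-plane set for a kubeadm cluster at v1.32.9. They can also be pre-pulled explicitly so that cluster initialization does not wait on the registry; the command below is standard kubeadm, though its use on this host is an assumption:

    kubeadm config images pull --kubernetes-version v1.32.9

Note that containerd's CRI configuration earlier in this log still advertises SandboxImage registry.k8s.io/pause:3.8 even though pause:3.10 is what gets pulled here; the pulled tag is presumably the one kubeadm pins for this Kubernetes version.
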
Nov 8 00:30:53.852427 containerd[1649]: time="2025-11-08T00:30:53.852407793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.852929 containerd[1649]: time="2025-11-08T00:30:53.852822511Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:30:53.853031 containerd[1649]: time="2025-11-08T00:30:53.853006511Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.854218 containerd[1649]: time="2025-11-08T00:30:53.854191767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.854777 containerd[1649]: time="2025-11-08T00:30:53.854636727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.637377ms" Nov 8 00:30:53.854777 containerd[1649]: time="2025-11-08T00:30:53.854658231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:30:53.855221 containerd[1649]: time="2025-11-08T00:30:53.855011613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:30:54.054680 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:30:54.061066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:54.206283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:54.208867 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:54.279922 kubelet[2293]: E1108 00:30:54.279883 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:54.280940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:54.281036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:54.770364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739967197.mount: Deactivated successfully. Nov 8 00:30:55.050975 update_engine[1622]: I20251108 00:30:55.050783 1622 update_attempter.cc:509] Updating boot flags... 
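
At this point the kubelet is still crash-looping (restart counter 4) for the same missing-config reason, while images keep arriving over the CRI socket. That pattern, together with the /etc/kubernetes/pki and static pod references that show up below, is consistent with a "kubeadm init" run from the interactive session; the actual invocation is not captured in this log, but it would look roughly like:

    # illustrative only; flags and CIDR are assumptions, not recovered from this system
    kubeadm init --kubernetes-version v1.32.9 --pod-network-cidr 10.244.0.0/16
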
Nov 8 00:30:55.104247 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2359) Nov 8 00:30:55.161725 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2355) Nov 8 00:30:56.554931 containerd[1649]: time="2025-11-08T00:30:56.554335883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:56.554931 containerd[1649]: time="2025-11-08T00:30:56.554708428Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:30:56.555221 containerd[1649]: time="2025-11-08T00:30:56.555003810Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:56.557315 containerd[1649]: time="2025-11-08T00:30:56.557297969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:56.558036 containerd[1649]: time="2025-11-08T00:30:56.558020011Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.702992884s" Nov 8 00:30:56.558063 containerd[1649]: time="2025-11-08T00:30:56.558039997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:30:58.565045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:58.570238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:58.593175 systemd[1]: Reloading requested from client PID 2401 ('systemctl') (unit session-9.scope)... Nov 8 00:30:58.593282 systemd[1]: Reloading... Nov 8 00:30:58.650933 zram_generator::config[2438]: No configuration found. Nov 8 00:30:58.719538 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:30:58.735871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:58.781169 systemd[1]: Reloading finished in 187 ms. Nov 8 00:30:58.808041 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:30:58.808115 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:30:58.808310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:58.814404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:59.182859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
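
This restart finally sticks: the systemctl-triggered reload and kubelet restart happen once /var/lib/kubelet/config.yaml exists, and KUBELET_KUBEADM_ARGS is now populated (only KUBELET_EXTRA_ARGS is still reported unset below). On kubeadm nodes those arguments normally live in an environment file sourced by the kubelet systemd drop-in; a sketch of what it plausibly contains here, inferred from the deprecation warnings and paths that follow (file location and exact values are assumptions):

    # /var/lib/kubelet/kubeadm-flags.env (assumed path)
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.10 --volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
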
Nov 8 00:30:59.186219 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:59.213809 kubelet[2516]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:59.213809 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:59.213809 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:59.218790 kubelet[2516]: I1108 00:30:59.218741 2516 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:59.420683 kubelet[2516]: I1108 00:30:59.420655 2516 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:30:59.420683 kubelet[2516]: I1108 00:30:59.420675 2516 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:59.420857 kubelet[2516]: I1108 00:30:59.420845 2516 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:30:59.455824 kubelet[2516]: I1108 00:30:59.455616 2516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:59.456519 kubelet[2516]: E1108 00:30:59.456496 2516 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:59.466535 kubelet[2516]: E1108 00:30:59.465813 2516 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:59.466535 kubelet[2516]: I1108 00:30:59.465845 2516 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:30:59.469933 kubelet[2516]: I1108 00:30:59.469642 2516 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:30:59.472798 kubelet[2516]: I1108 00:30:59.472740 2516 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:59.474036 kubelet[2516]: I1108 00:30:59.472787 2516 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:30:59.476371 kubelet[2516]: I1108 00:30:59.476352 2516 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:59.476371 kubelet[2516]: I1108 00:30:59.476372 2516 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:30:59.477689 kubelet[2516]: I1108 00:30:59.477676 2516 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:59.481858 kubelet[2516]: I1108 00:30:59.481845 2516 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:30:59.481892 kubelet[2516]: I1108 00:30:59.481871 2516 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:59.481892 kubelet[2516]: I1108 00:30:59.481891 2516 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:30:59.482517 kubelet[2516]: I1108 00:30:59.481902 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:59.489608 kubelet[2516]: W1108 00:30:59.489530 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:30:59.489702 kubelet[2516]: E1108 00:30:59.489619 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:59.489702 kubelet[2516]: W1108 00:30:59.489684 2516 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:30:59.490297 kubelet[2516]: E1108 00:30:59.489714 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:59.491052 kubelet[2516]: I1108 00:30:59.491037 2516 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:59.493671 kubelet[2516]: I1108 00:30:59.493654 2516 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:30:59.493721 kubelet[2516]: W1108 00:30:59.493707 2516 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:30:59.494346 kubelet[2516]: I1108 00:30:59.494191 2516 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:30:59.494346 kubelet[2516]: I1108 00:30:59.494218 2516 server.go:1287] "Started kubelet" Nov 8 00:30:59.496720 kubelet[2516]: I1108 00:30:59.496704 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:59.502859 kubelet[2516]: I1108 00:30:59.501537 2516 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:59.502859 kubelet[2516]: I1108 00:30:59.502425 2516 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:30:59.506442 kubelet[2516]: I1108 00:30:59.506391 2516 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:59.506626 kubelet[2516]: I1108 00:30:59.506613 2516 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:59.507239 kubelet[2516]: E1108 00:30:59.500678 2516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0ac39cdc743 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:30:59.494201155 +0000 UTC m=+0.305726740,LastTimestamp:2025-11-08 00:30:59.494201155 +0000 UTC m=+0.305726740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:30:59.507490 kubelet[2516]: I1108 00:30:59.507475 2516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:59.509430 kubelet[2516]: I1108 00:30:59.509418 2516 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:30:59.509917 kubelet[2516]: E1108 00:30:59.509627 2516 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:30:59.509917 
kubelet[2516]: I1108 00:30:59.509650 2516 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:30:59.509917 kubelet[2516]: I1108 00:30:59.509681 2516 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:30:59.510453 kubelet[2516]: W1108 00:30:59.510426 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:30:59.510482 kubelet[2516]: E1108 00:30:59.510462 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:59.510520 kubelet[2516]: E1108 00:30:59.510505 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" Nov 8 00:30:59.510668 kubelet[2516]: I1108 00:30:59.510656 2516 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:30:59.510724 kubelet[2516]: I1108 00:30:59.510713 2516 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:59.515542 kubelet[2516]: E1108 00:30:59.515463 2516 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:30:59.515645 kubelet[2516]: I1108 00:30:59.515637 2516 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:30:59.535059 kubelet[2516]: I1108 00:30:59.535003 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:59.535929 kubelet[2516]: I1108 00:30:59.535887 2516 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:30:59.535929 kubelet[2516]: I1108 00:30:59.535899 2516 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:30:59.535929 kubelet[2516]: I1108 00:30:59.535910 2516 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
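
The repeated "dial tcp 139.178.70.109:6443: connect: connection refused" errors from the informers, the lease controller and the certificate bootstrap are expected in this window: the kubelet is up, but the API server it is trying to reach is itself one of the static pods it has not created yet (note "Adding static pod path" path="/etc/kubernetes/manifests" above). Once the kube-apiserver, kube-controller-manager and kube-scheduler sandboxes created below are running, these errors should stop and node registration should succeed. A quick way to watch that happen directly against containerd (standard crictl usage; the endpoint matches the socket this log shows containerd serving on):

    # list pod sandboxes known to containerd's CRI plugin
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
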
Nov 8 00:30:59.535929 kubelet[2516]: I1108 00:30:59.535923 2516 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:30:59.536047 kubelet[2516]: E1108 00:30:59.535956 2516 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:59.539708 kubelet[2516]: I1108 00:30:59.539688 2516 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:59.539708 kubelet[2516]: I1108 00:30:59.539697 2516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:59.539708 kubelet[2516]: I1108 00:30:59.539706 2516 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:59.556910 kubelet[2516]: W1108 00:30:59.556866 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:30:59.579440 kubelet[2516]: E1108 00:30:59.556939 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:59.582048 kubelet[2516]: I1108 00:30:59.582017 2516 policy_none.go:49] "None policy: Start" Nov 8 00:30:59.582048 kubelet[2516]: I1108 00:30:59.582041 2516 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:30:59.582048 kubelet[2516]: I1108 00:30:59.582053 2516 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:30:59.593235 kubelet[2516]: I1108 00:30:59.593171 2516 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:30:59.593414 kubelet[2516]: I1108 00:30:59.593390 2516 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:59.593441 kubelet[2516]: I1108 00:30:59.593403 2516 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:59.594198 kubelet[2516]: I1108 00:30:59.594185 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:59.595960 kubelet[2516]: E1108 00:30:59.595947 2516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:30:59.596006 kubelet[2516]: E1108 00:30:59.595974 2516 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:30:59.648935 kubelet[2516]: E1108 00:30:59.648457 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:30:59.650801 kubelet[2516]: E1108 00:30:59.650785 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:30:59.652584 kubelet[2516]: E1108 00:30:59.652569 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:30:59.695146 kubelet[2516]: I1108 00:30:59.695131 2516 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:30:59.695536 kubelet[2516]: E1108 00:30:59.695524 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Nov 8 00:30:59.710978 kubelet[2516]: I1108 00:30:59.710786 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:30:59.710978 kubelet[2516]: I1108 00:30:59.710810 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:30:59.710978 kubelet[2516]: I1108 00:30:59.710821 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:30:59.710978 kubelet[2516]: I1108 00:30:59.710831 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:30:59.710978 kubelet[2516]: I1108 00:30:59.710839 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7661182155fbf0ab61993ce142f62cd5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7661182155fbf0ab61993ce142f62cd5\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:30:59.711115 kubelet[2516]: I1108 00:30:59.710849 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7661182155fbf0ab61993ce142f62cd5-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"7661182155fbf0ab61993ce142f62cd5\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:30:59.711115 kubelet[2516]: I1108 00:30:59.710858 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7661182155fbf0ab61993ce142f62cd5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7661182155fbf0ab61993ce142f62cd5\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:30:59.711115 kubelet[2516]: E1108 00:30:59.710863 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" Nov 8 00:30:59.711115 kubelet[2516]: I1108 00:30:59.710867 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:30:59.711115 kubelet[2516]: I1108 00:30:59.710889 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:30:59.896969 kubelet[2516]: I1108 00:30:59.896700 2516 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:30:59.897145 kubelet[2516]: E1108 00:30:59.897125 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Nov 8 00:30:59.950715 containerd[1649]: time="2025-11-08T00:30:59.950684852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:59.955676 containerd[1649]: time="2025-11-08T00:30:59.955325595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7661182155fbf0ab61993ce142f62cd5,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:59.955676 containerd[1649]: time="2025-11-08T00:30:59.955623760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:00.111611 kubelet[2516]: E1108 00:31:00.111546 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" Nov 8 00:31:00.298408 kubelet[2516]: I1108 00:31:00.298379 2516 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:00.298773 kubelet[2516]: E1108 00:31:00.298670 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Nov 8 00:31:00.578875 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3815649436.mount: Deactivated successfully. Nov 8 00:31:00.627598 containerd[1649]: time="2025-11-08T00:31:00.627526283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:00.629756 containerd[1649]: time="2025-11-08T00:31:00.629730519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:00.638956 containerd[1649]: time="2025-11-08T00:31:00.638852727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:31:00.646581 containerd[1649]: time="2025-11-08T00:31:00.646538026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:31:00.651893 containerd[1649]: time="2025-11-08T00:31:00.651828620Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:00.658419 containerd[1649]: time="2025-11-08T00:31:00.658311816Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:00.658764 containerd[1649]: time="2025-11-08T00:31:00.658739754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:31:00.661522 containerd[1649]: time="2025-11-08T00:31:00.661489630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:00.663933 containerd[1649]: time="2025-11-08T00:31:00.662866892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 712.125871ms" Nov 8 00:31:00.664132 containerd[1649]: time="2025-11-08T00:31:00.664113735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.733381ms" Nov 8 00:31:00.666018 containerd[1649]: time="2025-11-08T00:31:00.665987244Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 710.335151ms" Nov 8 00:31:00.714360 kubelet[2516]: W1108 00:31:00.714313 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:31:00.714489 kubelet[2516]: E1108 00:31:00.714476 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:00.772187 containerd[1649]: time="2025-11-08T00:31:00.770335705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:00.772187 containerd[1649]: time="2025-11-08T00:31:00.770396266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:00.772187 containerd[1649]: time="2025-11-08T00:31:00.770407922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:00.772187 containerd[1649]: time="2025-11-08T00:31:00.770537121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:00.776252 kubelet[2516]: W1108 00:31:00.776210 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:31:00.776702 kubelet[2516]: E1108 00:31:00.776682 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:00.781194 containerd[1649]: time="2025-11-08T00:31:00.781154385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:00.781347 containerd[1649]: time="2025-11-08T00:31:00.781278145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:00.781347 containerd[1649]: time="2025-11-08T00:31:00.781298480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:00.781561 containerd[1649]: time="2025-11-08T00:31:00.781542109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:00.784148 containerd[1649]: time="2025-11-08T00:31:00.784109516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:00.784774 containerd[1649]: time="2025-11-08T00:31:00.784235898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:00.784878 containerd[1649]: time="2025-11-08T00:31:00.784855511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:00.785059 containerd[1649]: time="2025-11-08T00:31:00.785035250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:00.815296 kubelet[2516]: W1108 00:31:00.815243 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:31:00.815296 kubelet[2516]: E1108 00:31:00.815295 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:00.849990 containerd[1649]: time="2025-11-08T00:31:00.849313373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a3dee51ee4c4661f8f0c1013da79c0790ad22b05c6b2e5f64498de4a75a7b99\"" Nov 8 00:31:00.852226 containerd[1649]: time="2025-11-08T00:31:00.852173676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7661182155fbf0ab61993ce142f62cd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6f68754dc105a2b53d7317097bf04ff31a30c348a28f3b7aad72acea78ac501\"" Nov 8 00:31:00.855550 containerd[1649]: time="2025-11-08T00:31:00.855519201Z" level=info msg="CreateContainer within sandbox \"c6f68754dc105a2b53d7317097bf04ff31a30c348a28f3b7aad72acea78ac501\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:31:00.855672 containerd[1649]: time="2025-11-08T00:31:00.855656660Z" level=info msg="CreateContainer within sandbox \"1a3dee51ee4c4661f8f0c1013da79c0790ad22b05c6b2e5f64498de4a75a7b99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:31:00.860311 containerd[1649]: time="2025-11-08T00:31:00.860285684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"1193da0217ea6c6c0f15a634aee3d230009d488ac995a4a34ce9788161266237\"" Nov 8 00:31:00.862941 containerd[1649]: time="2025-11-08T00:31:00.862196343Z" level=info msg="CreateContainer within sandbox \"1193da0217ea6c6c0f15a634aee3d230009d488ac995a4a34ce9788161266237\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:31:00.865152 containerd[1649]: time="2025-11-08T00:31:00.865126208Z" level=info msg="CreateContainer within sandbox \"c6f68754dc105a2b53d7317097bf04ff31a30c348a28f3b7aad72acea78ac501\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0c3fa8e2103f3cbbdb22bb41ccb80e68c0b2ba0662bd536027c5c07934c67032\"" Nov 8 00:31:00.865537 containerd[1649]: time="2025-11-08T00:31:00.865525845Z" level=info msg="StartContainer for \"0c3fa8e2103f3cbbdb22bb41ccb80e68c0b2ba0662bd536027c5c07934c67032\"" Nov 8 00:31:00.867535 containerd[1649]: time="2025-11-08T00:31:00.867520919Z" level=info msg="CreateContainer within sandbox \"1a3dee51ee4c4661f8f0c1013da79c0790ad22b05c6b2e5f64498de4a75a7b99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"7ecd90ab979c19dcfcef615c634f7c4468631dc4323e82d06dbdbda46a9a8ce2\"" Nov 8 00:31:00.867790 containerd[1649]: time="2025-11-08T00:31:00.867746458Z" level=info msg="StartContainer for \"7ecd90ab979c19dcfcef615c634f7c4468631dc4323e82d06dbdbda46a9a8ce2\"" Nov 8 00:31:00.873825 containerd[1649]: time="2025-11-08T00:31:00.873803328Z" level=info msg="CreateContainer within sandbox \"1193da0217ea6c6c0f15a634aee3d230009d488ac995a4a34ce9788161266237\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e2dc602526bde06e8838a804fe8c37992fcaf948180651ded7e7e98b99cee96\"" Nov 8 00:31:00.874805 containerd[1649]: time="2025-11-08T00:31:00.874626585Z" level=info msg="StartContainer for \"3e2dc602526bde06e8838a804fe8c37992fcaf948180651ded7e7e98b99cee96\"" Nov 8 00:31:00.912162 kubelet[2516]: E1108 00:31:00.911983 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" Nov 8 00:31:00.939931 containerd[1649]: time="2025-11-08T00:31:00.939551406Z" level=info msg="StartContainer for \"0c3fa8e2103f3cbbdb22bb41ccb80e68c0b2ba0662bd536027c5c07934c67032\" returns successfully" Nov 8 00:31:00.947831 containerd[1649]: time="2025-11-08T00:31:00.947354070Z" level=info msg="StartContainer for \"3e2dc602526bde06e8838a804fe8c37992fcaf948180651ded7e7e98b99cee96\" returns successfully" Nov 8 00:31:00.962066 containerd[1649]: time="2025-11-08T00:31:00.962038442Z" level=info msg="StartContainer for \"7ecd90ab979c19dcfcef615c634f7c4468631dc4323e82d06dbdbda46a9a8ce2\" returns successfully" Nov 8 00:31:00.986402 kubelet[2516]: W1108 00:31:00.986336 2516 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Nov 8 00:31:00.986402 kubelet[2516]: E1108 00:31:00.986387 2516 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:01.100346 kubelet[2516]: I1108 00:31:01.100030 2516 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:01.100653 kubelet[2516]: E1108 00:31:01.100574 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Nov 8 00:31:01.477994 kubelet[2516]: E1108 00:31:01.477842 2516 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:01.546604 kubelet[2516]: E1108 00:31:01.546452 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:01.549264 kubelet[2516]: E1108 00:31:01.548947 
2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:01.549264 kubelet[2516]: E1108 00:31:01.549166 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:02.552150 kubelet[2516]: E1108 00:31:02.552123 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:02.552531 kubelet[2516]: E1108 00:31:02.552416 2516 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:02.701801 kubelet[2516]: I1108 00:31:02.701435 2516 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:03.081474 kubelet[2516]: E1108 00:31:03.081444 2516 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:31:03.148764 kubelet[2516]: I1108 00:31:03.148723 2516 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:31:03.210119 kubelet[2516]: I1108 00:31:03.210040 2516 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:03.219243 kubelet[2516]: E1108 00:31:03.219063 2516 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:03.219243 kubelet[2516]: I1108 00:31:03.219086 2516 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:03.221983 kubelet[2516]: E1108 00:31:03.221081 2516 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:03.221983 kubelet[2516]: I1108 00:31:03.221105 2516 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:03.223344 kubelet[2516]: E1108 00:31:03.223278 2516 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:03.485055 kubelet[2516]: I1108 00:31:03.484899 2516 apiserver.go:52] "Watching apiserver" Nov 8 00:31:03.510455 kubelet[2516]: I1108 00:31:03.510423 2516 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:31:04.741575 systemd[1]: Reloading requested from client PID 2785 ('systemctl') (unit session-9.scope)... Nov 8 00:31:04.741587 systemd[1]: Reloading... Nov 8 00:31:04.796355 zram_generator::config[2822]: No configuration found. Nov 8 00:31:04.876693 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Nov 8 00:31:04.894187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
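The entries above record a systemd configuration reload requested via systemctl (session-9), followed by kubelet.service being stopped and started again with a new main PID (2900). A sketch of that same daemon-reload / restart / wait-until-active sequence, assuming systemctl is on PATH and the caller may manage units; this is illustrative only, not the tooling that actually drove the reload here:

```go
// restart_kubelet.go — sketch of the sequence the log records: systemd reloads
// its unit files, kubelet.service is stopped, then started again.
// Assumes systemctl is on PATH and the process has permission to manage units.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func run(args ...string) error {
	out, err := exec.Command("systemctl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	if err := run("daemon-reload"); err != nil { // "Reloading..." in the log
		log.Fatal(err)
	}
	if err := run("restart", "kubelet.service"); err != nil { // stop + start, as logged
		log.Fatal(err)
	}
	// Poll until systemd reports the unit active, rather than assuming success.
	for i := 0; i < 30; i++ {
		if err := run("is-active", "--quiet", "kubelet.service"); err == nil {
			fmt.Println("kubelet.service is active")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("kubelet.service did not become active in time")
}
```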
Nov 8 00:31:04.945303 systemd[1]: Reloading finished in 203 ms. Nov 8 00:31:04.966747 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:04.978151 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:31:04.978313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:04.984245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:05.364458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:05.367610 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:31:05.415750 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:31:05.415989 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:31:05.416017 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:31:05.419186 kubelet[2900]: I1108 00:31:05.419166 2900 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:31:05.426476 kubelet[2900]: I1108 00:31:05.426460 2900 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:31:05.426547 kubelet[2900]: I1108 00:31:05.426542 2900 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:31:05.426714 kubelet[2900]: I1108 00:31:05.426707 2900 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:31:05.428670 kubelet[2900]: I1108 00:31:05.428644 2900 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:31:05.430617 kubelet[2900]: I1108 00:31:05.430591 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:31:05.439645 kubelet[2900]: E1108 00:31:05.439626 2900 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:31:05.439645 kubelet[2900]: I1108 00:31:05.439644 2900 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:31:05.444846 kubelet[2900]: I1108 00:31:05.443673 2900 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:31:05.444846 kubelet[2900]: I1108 00:31:05.444056 2900 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:31:05.444846 kubelet[2900]: I1108 00:31:05.444069 2900 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:31:05.444846 kubelet[2900]: I1108 00:31:05.444261 2900 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:31:05.445000 kubelet[2900]: I1108 00:31:05.444270 2900 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:31:05.445000 kubelet[2900]: I1108 00:31:05.444301 2900 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:31:05.445000 kubelet[2900]: I1108 00:31:05.444471 2900 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:31:05.445000 kubelet[2900]: I1108 00:31:05.444491 2900 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:31:05.445000 kubelet[2900]: I1108 00:31:05.444506 2900 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:31:05.445000 kubelet[2900]: I1108 00:31:05.444515 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:31:05.450827 kubelet[2900]: I1108 00:31:05.450802 2900 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:31:05.451111 kubelet[2900]: I1108 00:31:05.451099 2900 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:31:05.453910 kubelet[2900]: I1108 00:31:05.453897 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:31:05.453956 kubelet[2900]: I1108 00:31:05.453938 2900 server.go:1287] "Started kubelet" Nov 8 00:31:05.455977 kubelet[2900]: I1108 00:31:05.455044 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:31:05.461183 kubelet[2900]: E1108 00:31:05.461164 2900 kubelet.go:1555] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:31:05.463674 kubelet[2900]: I1108 00:31:05.463606 2900 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:31:05.466018 kubelet[2900]: I1108 00:31:05.465980 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:31:05.466215 kubelet[2900]: I1108 00:31:05.466207 2900 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:31:05.466376 kubelet[2900]: I1108 00:31:05.466368 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:31:05.469063 kubelet[2900]: I1108 00:31:05.469057 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:31:05.469843 kubelet[2900]: I1108 00:31:05.469834 2900 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:31:05.469980 kubelet[2900]: I1108 00:31:05.469973 2900 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:31:05.470019 kubelet[2900]: I1108 00:31:05.469982 2900 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:31:05.471072 kubelet[2900]: I1108 00:31:05.471058 2900 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:31:05.471365 kubelet[2900]: I1108 00:31:05.471354 2900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:31:05.474202 kubelet[2900]: I1108 00:31:05.474174 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:31:05.474306 kubelet[2900]: I1108 00:31:05.474299 2900 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:31:05.474812 kubelet[2900]: I1108 00:31:05.474801 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:31:05.475626 kubelet[2900]: I1108 00:31:05.475614 2900 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:31:05.475654 kubelet[2900]: I1108 00:31:05.475633 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
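At this point the restarted kubelet (PID 2900) is listening on 0.0.0.0:10250 and serving the podresources socket, and the container factories and iptables rules that follow are part of the same startup. A quick way to confirm the daemon is up is to hit its local healthz endpoint; the sketch below assumes the default read-only healthz port 10248 on localhost (the secure port 10250 shown in the log requires TLS client authentication and is not probed here):

```go
// kubelet_healthz.go — sketch: check the kubelet's local healthz endpoint.
// Assumes the default healthz port 10248 on 127.0.0.1.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		log.Fatalf("kubelet healthz not reachable: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d: %s\n", resp.StatusCode, body) // expect "200: ok" on a healthy node
}
```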
Nov 8 00:31:05.475654 kubelet[2900]: I1108 00:31:05.475639 2900 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:31:05.475691 kubelet[2900]: E1108 00:31:05.475666 2900 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:31:05.521000 kubelet[2900]: I1108 00:31:05.520979 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:31:05.521000 kubelet[2900]: I1108 00:31:05.520994 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:31:05.521000 kubelet[2900]: I1108 00:31:05.521007 2900 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:31:05.521125 kubelet[2900]: I1108 00:31:05.521105 2900 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:31:05.521125 kubelet[2900]: I1108 00:31:05.521112 2900 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:31:05.521125 kubelet[2900]: I1108 00:31:05.521124 2900 policy_none.go:49] "None policy: Start" Nov 8 00:31:05.521172 kubelet[2900]: I1108 00:31:05.521130 2900 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:31:05.521172 kubelet[2900]: I1108 00:31:05.521139 2900 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:31:05.521201 kubelet[2900]: I1108 00:31:05.521196 2900 state_mem.go:75] "Updated machine memory state" Nov 8 00:31:05.521842 kubelet[2900]: I1108 00:31:05.521830 2900 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:31:05.522710 kubelet[2900]: I1108 00:31:05.521932 2900 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:31:05.522710 kubelet[2900]: I1108 00:31:05.521941 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:31:05.522710 kubelet[2900]: I1108 00:31:05.522478 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:31:05.524157 kubelet[2900]: E1108 00:31:05.524145 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:31:05.576251 kubelet[2900]: I1108 00:31:05.576180 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:05.577462 kubelet[2900]: I1108 00:31:05.577443 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:05.577522 kubelet[2900]: I1108 00:31:05.577508 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:05.624779 kubelet[2900]: I1108 00:31:05.624705 2900 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:05.630951 kubelet[2900]: I1108 00:31:05.630910 2900 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:31:05.631056 kubelet[2900]: I1108 00:31:05.630985 2900 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:31:05.671188 kubelet[2900]: I1108 00:31:05.671158 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:05.671398 kubelet[2900]: I1108 00:31:05.671296 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:05.671398 kubelet[2900]: I1108 00:31:05.671310 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7661182155fbf0ab61993ce142f62cd5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7661182155fbf0ab61993ce142f62cd5\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:05.671398 kubelet[2900]: I1108 00:31:05.671320 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7661182155fbf0ab61993ce142f62cd5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7661182155fbf0ab61993ce142f62cd5\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:05.671398 kubelet[2900]: I1108 00:31:05.671330 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:05.671398 kubelet[2900]: I1108 00:31:05.671339 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:05.671505 kubelet[2900]: I1108 00:31:05.671349 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7661182155fbf0ab61993ce142f62cd5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7661182155fbf0ab61993ce142f62cd5\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:05.671505 kubelet[2900]: I1108 00:31:05.671357 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:05.671505 kubelet[2900]: I1108 00:31:05.671366 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:06.448853 kubelet[2900]: I1108 00:31:06.448823 2900 apiserver.go:52] "Watching apiserver" Nov 8 00:31:06.470755 kubelet[2900]: I1108 00:31:06.470693 2900 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:31:06.505100 kubelet[2900]: I1108 00:31:06.505076 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:06.505719 kubelet[2900]: I1108 00:31:06.505702 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:06.505830 kubelet[2900]: I1108 00:31:06.505818 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:06.508940 kubelet[2900]: E1108 00:31:06.508906 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:06.509290 kubelet[2900]: E1108 00:31:06.509268 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:06.510183 kubelet[2900]: E1108 00:31:06.510168 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:06.523984 kubelet[2900]: I1108 00:31:06.523874 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5237494969999998 podStartE2EDuration="1.523749497s" podCreationTimestamp="2025-11-08 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:06.519968895 +0000 UTC m=+1.133423775" watchObservedRunningTime="2025-11-08 00:31:06.523749497 +0000 UTC m=+1.137204368" Nov 8 00:31:06.524108 kubelet[2900]: I1108 00:31:06.524009 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.524004506 podStartE2EDuration="1.524004506s" podCreationTimestamp="2025-11-08 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:06.523546885 +0000 UTC m=+1.137001761" watchObservedRunningTime="2025-11-08 00:31:06.524004506 +0000 UTC m=+1.137459381" 
Nov 8 00:31:11.850252 kubelet[2900]: I1108 00:31:11.850231 2900 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:31:11.850732 kubelet[2900]: I1108 00:31:11.850524 2900 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:31:11.850759 containerd[1649]: time="2025-11-08T00:31:11.850430020Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:31:12.506227 kubelet[2900]: I1108 00:31:12.506182 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.506167451 podStartE2EDuration="7.506167451s" podCreationTimestamp="2025-11-08 00:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:06.527784678 +0000 UTC m=+1.141239553" watchObservedRunningTime="2025-11-08 00:31:12.506167451 +0000 UTC m=+7.119622331" Nov 8 00:31:12.520082 kubelet[2900]: I1108 00:31:12.520052 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brj7\" (UniqueName: \"kubernetes.io/projected/432f5d80-0f1d-4e4f-b34d-19c49e59815c-kube-api-access-2brj7\") pod \"kube-proxy-bpz6s\" (UID: \"432f5d80-0f1d-4e4f-b34d-19c49e59815c\") " pod="kube-system/kube-proxy-bpz6s" Nov 8 00:31:12.520082 kubelet[2900]: I1108 00:31:12.520082 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/432f5d80-0f1d-4e4f-b34d-19c49e59815c-kube-proxy\") pod \"kube-proxy-bpz6s\" (UID: \"432f5d80-0f1d-4e4f-b34d-19c49e59815c\") " pod="kube-system/kube-proxy-bpz6s" Nov 8 00:31:12.520248 kubelet[2900]: I1108 00:31:12.520096 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/432f5d80-0f1d-4e4f-b34d-19c49e59815c-xtables-lock\") pod \"kube-proxy-bpz6s\" (UID: \"432f5d80-0f1d-4e4f-b34d-19c49e59815c\") " pod="kube-system/kube-proxy-bpz6s" Nov 8 00:31:12.520248 kubelet[2900]: I1108 00:31:12.520105 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/432f5d80-0f1d-4e4f-b34d-19c49e59815c-lib-modules\") pod \"kube-proxy-bpz6s\" (UID: \"432f5d80-0f1d-4e4f-b34d-19c49e59815c\") " pod="kube-system/kube-proxy-bpz6s" Nov 8 00:31:12.810750 containerd[1649]: time="2025-11-08T00:31:12.810705625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bpz6s,Uid:432f5d80-0f1d-4e4f-b34d-19c49e59815c,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:12.826833 containerd[1649]: time="2025-11-08T00:31:12.826762280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:12.826833 containerd[1649]: time="2025-11-08T00:31:12.826810429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:12.826833 containerd[1649]: time="2025-11-08T00:31:12.826817787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:12.827077 containerd[1649]: time="2025-11-08T00:31:12.826866472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:12.875927 containerd[1649]: time="2025-11-08T00:31:12.874676086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bpz6s,Uid:432f5d80-0f1d-4e4f-b34d-19c49e59815c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3afadd94c4b02b7dea5c35d600f6d63aa8a91e3061c90c7ee40fb34957cd7f1f\"" Nov 8 00:31:12.890749 containerd[1649]: time="2025-11-08T00:31:12.890718797Z" level=info msg="CreateContainer within sandbox \"3afadd94c4b02b7dea5c35d600f6d63aa8a91e3061c90c7ee40fb34957cd7f1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:31:12.897514 containerd[1649]: time="2025-11-08T00:31:12.897487690Z" level=info msg="CreateContainer within sandbox \"3afadd94c4b02b7dea5c35d600f6d63aa8a91e3061c90c7ee40fb34957cd7f1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03603b965969941cc7a585d758538a04a0134f6b92c6ad889cf926179266583c\"" Nov 8 00:31:12.898658 containerd[1649]: time="2025-11-08T00:31:12.898060306Z" level=info msg="StartContainer for \"03603b965969941cc7a585d758538a04a0134f6b92c6ad889cf926179266583c\"" Nov 8 00:31:12.923137 kubelet[2900]: I1108 00:31:12.922770 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjkn\" (UniqueName: \"kubernetes.io/projected/1ba0e694-bbff-44fd-9fef-f574415f91e9-kube-api-access-fjjkn\") pod \"tigera-operator-7dcd859c48-rgk82\" (UID: \"1ba0e694-bbff-44fd-9fef-f574415f91e9\") " pod="tigera-operator/tigera-operator-7dcd859c48-rgk82" Nov 8 00:31:12.923137 kubelet[2900]: I1108 00:31:12.922804 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ba0e694-bbff-44fd-9fef-f574415f91e9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rgk82\" (UID: \"1ba0e694-bbff-44fd-9fef-f574415f91e9\") " pod="tigera-operator/tigera-operator-7dcd859c48-rgk82" Nov 8 00:31:12.934110 containerd[1649]: time="2025-11-08T00:31:12.934082146Z" level=info msg="StartContainer for \"03603b965969941cc7a585d758538a04a0134f6b92c6ad889cf926179266583c\" returns successfully" Nov 8 00:31:13.186716 containerd[1649]: time="2025-11-08T00:31:13.186552864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rgk82,Uid:1ba0e694-bbff-44fd-9fef-f574415f91e9,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:31:13.205248 containerd[1649]: time="2025-11-08T00:31:13.205047191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:13.205546 containerd[1649]: time="2025-11-08T00:31:13.205370195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:13.205546 containerd[1649]: time="2025-11-08T00:31:13.205497608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:13.205730 containerd[1649]: time="2025-11-08T00:31:13.205674445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:13.256496 containerd[1649]: time="2025-11-08T00:31:13.256452508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rgk82,Uid:1ba0e694-bbff-44fd-9fef-f574415f91e9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2925000762de5789ea6a05491c80e3c2fdc569017f8bd49ea1274bf347fe48f5\"" Nov 8 00:31:13.258370 containerd[1649]: time="2025-11-08T00:31:13.257998587Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:31:13.525058 kubelet[2900]: I1108 00:31:13.524664 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bpz6s" podStartSLOduration=1.524652744 podStartE2EDuration="1.524652744s" podCreationTimestamp="2025-11-08 00:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:13.524624894 +0000 UTC m=+8.138079774" watchObservedRunningTime="2025-11-08 00:31:13.524652744 +0000 UTC m=+8.138107619" Nov 8 00:31:15.073121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911258282.mount: Deactivated successfully. Nov 8 00:31:15.441113 containerd[1649]: time="2025-11-08T00:31:15.440481031Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:15.441113 containerd[1649]: time="2025-11-08T00:31:15.440889352Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:31:15.441113 containerd[1649]: time="2025-11-08T00:31:15.441045659Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:15.458350 containerd[1649]: time="2025-11-08T00:31:15.458320238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:15.458968 containerd[1649]: time="2025-11-08T00:31:15.458943446Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.200926859s" Nov 8 00:31:15.458968 containerd[1649]: time="2025-11-08T00:31:15.458966116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:31:15.461510 containerd[1649]: time="2025-11-08T00:31:15.461481997Z" level=info msg="CreateContainer within sandbox \"2925000762de5789ea6a05491c80e3c2fdc569017f8bd49ea1274bf347fe48f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:31:15.489904 containerd[1649]: time="2025-11-08T00:31:15.489873501Z" level=info msg="CreateContainer within sandbox \"2925000762de5789ea6a05491c80e3c2fdc569017f8bd49ea1274bf347fe48f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"94ba80c1548c26dcafec6b37978fa223074801cbc3d5757fd4b78edc171a2c2a\"" Nov 8 00:31:15.490449 containerd[1649]: time="2025-11-08T00:31:15.490323369Z" level=info msg="StartContainer for 
\"94ba80c1548c26dcafec6b37978fa223074801cbc3d5757fd4b78edc171a2c2a\"" Nov 8 00:31:15.530121 containerd[1649]: time="2025-11-08T00:31:15.530092442Z" level=info msg="StartContainer for \"94ba80c1548c26dcafec6b37978fa223074801cbc3d5757fd4b78edc171a2c2a\" returns successfully" Nov 8 00:31:16.534081 kubelet[2900]: I1108 00:31:16.534043 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rgk82" podStartSLOduration=2.331855739 podStartE2EDuration="4.534033115s" podCreationTimestamp="2025-11-08 00:31:12 +0000 UTC" firstStartedPulling="2025-11-08 00:31:13.257358091 +0000 UTC m=+7.870812963" lastFinishedPulling="2025-11-08 00:31:15.459535464 +0000 UTC m=+10.072990339" observedRunningTime="2025-11-08 00:31:16.533956331 +0000 UTC m=+11.147411210" watchObservedRunningTime="2025-11-08 00:31:16.534033115 +0000 UTC m=+11.147487995" Nov 8 00:31:21.476605 sudo[1976]: pam_unix(sudo:session): session closed for user root Nov 8 00:31:21.479755 sshd[1969]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:21.484731 systemd[1]: sshd@6-139.178.70.109:22-147.75.109.163:50926.service: Deactivated successfully. Nov 8 00:31:21.492977 systemd-logind[1620]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:31:21.493939 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:31:21.496179 systemd-logind[1620]: Removed session 9. Nov 8 00:31:25.904581 kubelet[2900]: I1108 00:31:25.904549 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd38ba54-5b11-49fb-ab18-e65daf157a29-tigera-ca-bundle\") pod \"calico-typha-6549c9bdb4-mjkt2\" (UID: \"bd38ba54-5b11-49fb-ab18-e65daf157a29\") " pod="calico-system/calico-typha-6549c9bdb4-mjkt2" Nov 8 00:31:25.904581 kubelet[2900]: I1108 00:31:25.904581 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd38ba54-5b11-49fb-ab18-e65daf157a29-typha-certs\") pod \"calico-typha-6549c9bdb4-mjkt2\" (UID: \"bd38ba54-5b11-49fb-ab18-e65daf157a29\") " pod="calico-system/calico-typha-6549c9bdb4-mjkt2" Nov 8 00:31:25.904986 kubelet[2900]: I1108 00:31:25.904596 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbwnc\" (UniqueName: \"kubernetes.io/projected/bd38ba54-5b11-49fb-ab18-e65daf157a29-kube-api-access-kbwnc\") pod \"calico-typha-6549c9bdb4-mjkt2\" (UID: \"bd38ba54-5b11-49fb-ab18-e65daf157a29\") " pod="calico-system/calico-typha-6549c9bdb4-mjkt2" Nov 8 00:31:26.106063 kubelet[2900]: I1108 00:31:26.106030 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-cni-net-dir\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.106063 kubelet[2900]: I1108 00:31:26.106060 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-node-certs\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.106063 kubelet[2900]: I1108 00:31:26.106071 2900 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-cni-log-dir\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.106063 kubelet[2900]: I1108 00:31:26.106080 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-var-lib-calico\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112606 kubelet[2900]: I1108 00:31:26.106090 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-var-run-calico\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112606 kubelet[2900]: I1108 00:31:26.106099 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s4gx\" (UniqueName: \"kubernetes.io/projected/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-kube-api-access-4s4gx\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112606 kubelet[2900]: I1108 00:31:26.106113 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-cni-bin-dir\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112606 kubelet[2900]: I1108 00:31:26.106122 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-flexvol-driver-host\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112606 kubelet[2900]: I1108 00:31:26.106130 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-policysync\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112742 kubelet[2900]: I1108 00:31:26.106141 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-lib-modules\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112742 kubelet[2900]: I1108 00:31:26.106151 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-tigera-ca-bundle\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.112742 kubelet[2900]: I1108 00:31:26.106159 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a5a994b-8c0c-4136-aaf4-ce922cd84ee9-xtables-lock\") pod \"calico-node-78nlx\" (UID: \"0a5a994b-8c0c-4136-aaf4-ce922cd84ee9\") " pod="calico-system/calico-node-78nlx" Nov 8 00:31:26.163701 containerd[1649]: time="2025-11-08T00:31:26.163602631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6549c9bdb4-mjkt2,Uid:bd38ba54-5b11-49fb-ab18-e65daf157a29,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:26.217457 kubelet[2900]: E1108 00:31:26.217329 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.217457 kubelet[2900]: W1108 00:31:26.217342 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.228928 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.229121 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.229716 kubelet[2900]: W1108 00:31:26.229130 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.229144 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.229234 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.229716 kubelet[2900]: W1108 00:31:26.229238 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.229244 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.229356 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.229716 kubelet[2900]: W1108 00:31:26.229361 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.229716 kubelet[2900]: E1108 00:31:26.229366 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.232431 kubelet[2900]: E1108 00:31:26.232416 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.232431 kubelet[2900]: W1108 00:31:26.232428 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.232499 kubelet[2900]: E1108 00:31:26.232440 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.234001 kubelet[2900]: E1108 00:31:26.233987 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.234001 kubelet[2900]: W1108 00:31:26.233998 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.234070 kubelet[2900]: E1108 00:31:26.234010 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.234111 kubelet[2900]: E1108 00:31:26.234102 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.234111 kubelet[2900]: W1108 00:31:26.234109 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.234152 kubelet[2900]: E1108 00:31:26.234119 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.234205 kubelet[2900]: E1108 00:31:26.234197 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.234254 kubelet[2900]: W1108 00:31:26.234205 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.234254 kubelet[2900]: E1108 00:31:26.234212 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.235574 kubelet[2900]: E1108 00:31:26.234315 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.235574 kubelet[2900]: W1108 00:31:26.234319 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.235574 kubelet[2900]: E1108 00:31:26.234324 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.237805 kubelet[2900]: E1108 00:31:26.237479 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.237805 kubelet[2900]: W1108 00:31:26.237492 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.237805 kubelet[2900]: E1108 00:31:26.237504 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.242931 containerd[1649]: time="2025-11-08T00:31:26.242844156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:26.243380 containerd[1649]: time="2025-11-08T00:31:26.243026121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:26.243380 containerd[1649]: time="2025-11-08T00:31:26.243297132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:26.243537 containerd[1649]: time="2025-11-08T00:31:26.243395402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:26.302073 kubelet[2900]: E1108 00:31:26.300134 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:26.316178 containerd[1649]: time="2025-11-08T00:31:26.316151095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6549c9bdb4-mjkt2,Uid:bd38ba54-5b11-49fb-ab18-e65daf157a29,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3fb762333a8c0ad8a010c0878f5d627c458e880010b3a2a882336b434fbf03e\"" Nov 8 00:31:26.318079 containerd[1649]: time="2025-11-08T00:31:26.317977727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:31:26.356453 containerd[1649]: time="2025-11-08T00:31:26.356163791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-78nlx,Uid:0a5a994b-8c0c-4136-aaf4-ce922cd84ee9,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:26.398303 kubelet[2900]: E1108 00:31:26.398275 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.398303 kubelet[2900]: W1108 00:31:26.398296 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.398427 kubelet[2900]: E1108 00:31:26.398313 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.398475 kubelet[2900]: E1108 00:31:26.398459 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.398497 kubelet[2900]: W1108 00:31:26.398477 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.398516 kubelet[2900]: E1108 00:31:26.398498 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.399183 kubelet[2900]: E1108 00:31:26.398608 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.399183 kubelet[2900]: W1108 00:31:26.398616 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.399183 kubelet[2900]: E1108 00:31:26.398623 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.399264 kubelet[2900]: E1108 00:31:26.399213 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.399264 kubelet[2900]: W1108 00:31:26.399220 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.399264 kubelet[2900]: E1108 00:31:26.399228 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.399853 kubelet[2900]: E1108 00:31:26.399840 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.399853 kubelet[2900]: W1108 00:31:26.399849 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.399908 kubelet[2900]: E1108 00:31:26.399858 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.400750 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404635 kubelet[2900]: W1108 00:31:26.400758 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.400767 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.400921 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404635 kubelet[2900]: W1108 00:31:26.400929 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.400948 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.401063 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404635 kubelet[2900]: W1108 00:31:26.401070 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.401078 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404635 kubelet[2900]: E1108 00:31:26.402431 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404817 kubelet[2900]: W1108 00:31:26.402442 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404817 kubelet[2900]: E1108 00:31:26.402454 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404817 kubelet[2900]: E1108 00:31:26.402636 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404817 kubelet[2900]: W1108 00:31:26.402643 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404817 kubelet[2900]: E1108 00:31:26.402653 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404817 kubelet[2900]: E1108 00:31:26.402778 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404817 kubelet[2900]: W1108 00:31:26.402802 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404817 kubelet[2900]: E1108 00:31:26.402809 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.404817 kubelet[2900]: E1108 00:31:26.402942 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404817 kubelet[2900]: W1108 00:31:26.402947 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.402953 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.403073 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404996 kubelet[2900]: W1108 00:31:26.403078 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.403084 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.403183 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404996 kubelet[2900]: W1108 00:31:26.403188 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.403193 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.403293 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.404996 kubelet[2900]: W1108 00:31:26.403298 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.404996 kubelet[2900]: E1108 00:31:26.403302 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.403425 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413341 kubelet[2900]: W1108 00:31:26.403433 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.403445 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.403999 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413341 kubelet[2900]: W1108 00:31:26.404005 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.404010 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.405344 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413341 kubelet[2900]: W1108 00:31:26.405352 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.405358 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413341 kubelet[2900]: E1108 00:31:26.405892 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413505 kubelet[2900]: W1108 00:31:26.405897 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413505 kubelet[2900]: E1108 00:31:26.405904 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413505 kubelet[2900]: E1108 00:31:26.406043 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413505 kubelet[2900]: W1108 00:31:26.406048 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413505 kubelet[2900]: E1108 00:31:26.406053 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413505 kubelet[2900]: E1108 00:31:26.409299 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413505 kubelet[2900]: W1108 00:31:26.409306 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413505 kubelet[2900]: E1108 00:31:26.409312 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.413505 kubelet[2900]: I1108 00:31:26.409337 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c5b205c6-f534-4f27-bd2e-0a8fe1443335-varrun\") pod \"csi-node-driver-m4wsd\" (UID: \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\") " pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:26.413647 kubelet[2900]: E1108 00:31:26.409443 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413647 kubelet[2900]: W1108 00:31:26.409450 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413647 kubelet[2900]: E1108 00:31:26.409461 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413647 kubelet[2900]: I1108 00:31:26.409471 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c5b205c6-f534-4f27-bd2e-0a8fe1443335-socket-dir\") pod \"csi-node-driver-m4wsd\" (UID: \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\") " pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:26.413647 kubelet[2900]: E1108 00:31:26.409560 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413647 kubelet[2900]: W1108 00:31:26.409565 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413647 kubelet[2900]: E1108 00:31:26.409571 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413647 kubelet[2900]: I1108 00:31:26.409585 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xszc\" (UniqueName: \"kubernetes.io/projected/c5b205c6-f534-4f27-bd2e-0a8fe1443335-kube-api-access-6xszc\") pod \"csi-node-driver-m4wsd\" (UID: \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\") " pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:26.413647 kubelet[2900]: E1108 00:31:26.409682 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413817 kubelet[2900]: W1108 00:31:26.409687 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413817 kubelet[2900]: E1108 00:31:26.409698 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.413817 kubelet[2900]: I1108 00:31:26.409706 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5b205c6-f534-4f27-bd2e-0a8fe1443335-kubelet-dir\") pod \"csi-node-driver-m4wsd\" (UID: \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\") " pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:26.413817 kubelet[2900]: E1108 00:31:26.409798 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413817 kubelet[2900]: W1108 00:31:26.409803 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413817 kubelet[2900]: E1108 00:31:26.409813 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413817 kubelet[2900]: I1108 00:31:26.409821 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c5b205c6-f534-4f27-bd2e-0a8fe1443335-registration-dir\") pod \"csi-node-driver-m4wsd\" (UID: \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\") " pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:26.413817 kubelet[2900]: E1108 00:31:26.409907 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413974 kubelet[2900]: W1108 00:31:26.409933 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413974 kubelet[2900]: E1108 00:31:26.409946 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413974 kubelet[2900]: E1108 00:31:26.410061 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413974 kubelet[2900]: W1108 00:31:26.410069 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413974 kubelet[2900]: E1108 00:31:26.410079 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.413974 kubelet[2900]: E1108 00:31:26.410197 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413974 kubelet[2900]: W1108 00:31:26.410203 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.413974 kubelet[2900]: E1108 00:31:26.410211 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.413974 kubelet[2900]: E1108 00:31:26.410312 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.413974 kubelet[2900]: W1108 00:31:26.410317 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410326 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410424 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.414140 kubelet[2900]: W1108 00:31:26.410429 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410437 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410526 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.414140 kubelet[2900]: W1108 00:31:26.410531 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410537 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410646 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.414140 kubelet[2900]: W1108 00:31:26.410651 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414140 kubelet[2900]: E1108 00:31:26.410657 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.414302 kubelet[2900]: E1108 00:31:26.410750 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.414302 kubelet[2900]: W1108 00:31:26.410755 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414302 kubelet[2900]: E1108 00:31:26.410759 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.414302 kubelet[2900]: E1108 00:31:26.410845 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.414302 kubelet[2900]: W1108 00:31:26.410850 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414302 kubelet[2900]: E1108 00:31:26.410854 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.414302 kubelet[2900]: E1108 00:31:26.410953 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.414302 kubelet[2900]: W1108 00:31:26.410958 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.414302 kubelet[2900]: E1108 00:31:26.410963 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.510290 kubelet[2900]: E1108 00:31:26.510216 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.510290 kubelet[2900]: W1108 00:31:26.510229 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.510290 kubelet[2900]: E1108 00:31:26.510242 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.510688 kubelet[2900]: E1108 00:31:26.510601 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.510688 kubelet[2900]: W1108 00:31:26.510612 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.510688 kubelet[2900]: E1108 00:31:26.510621 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.510836 kubelet[2900]: E1108 00:31:26.510828 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.510928 kubelet[2900]: W1108 00:31:26.510874 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.510928 kubelet[2900]: E1108 00:31:26.510888 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.511157 kubelet[2900]: E1108 00:31:26.511101 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.511157 kubelet[2900]: W1108 00:31:26.511108 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.511157 kubelet[2900]: E1108 00:31:26.511117 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.511295 kubelet[2900]: E1108 00:31:26.511228 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.511295 kubelet[2900]: W1108 00:31:26.511233 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.511295 kubelet[2900]: E1108 00:31:26.511239 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.511404 kubelet[2900]: E1108 00:31:26.511399 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.511566 kubelet[2900]: W1108 00:31:26.511492 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.511566 kubelet[2900]: E1108 00:31:26.511503 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.511746 kubelet[2900]: E1108 00:31:26.511691 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.511746 kubelet[2900]: W1108 00:31:26.511701 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.511746 kubelet[2900]: E1108 00:31:26.511709 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.512068 kubelet[2900]: E1108 00:31:26.512028 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.512068 kubelet[2900]: W1108 00:31:26.512034 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.512068 kubelet[2900]: E1108 00:31:26.512043 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.512277 kubelet[2900]: E1108 00:31:26.512202 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.512277 kubelet[2900]: W1108 00:31:26.512208 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.512430 kubelet[2900]: E1108 00:31:26.512369 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.512430 kubelet[2900]: W1108 00:31:26.512378 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.512651 kubelet[2900]: E1108 00:31:26.512557 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.512651 kubelet[2900]: W1108 00:31:26.512567 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.512651 kubelet[2900]: E1108 00:31:26.512574 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.512651 kubelet[2900]: E1108 00:31:26.512614 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.512728 kubelet[2900]: E1108 00:31:26.512654 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.512853 kubelet[2900]: E1108 00:31:26.512765 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.512853 kubelet[2900]: W1108 00:31:26.512772 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.512853 kubelet[2900]: E1108 00:31:26.512778 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.513074 kubelet[2900]: E1108 00:31:26.512940 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.513074 kubelet[2900]: W1108 00:31:26.512946 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.513074 kubelet[2900]: E1108 00:31:26.512952 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.513281 kubelet[2900]: E1108 00:31:26.513207 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.513281 kubelet[2900]: W1108 00:31:26.513214 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.513281 kubelet[2900]: E1108 00:31:26.513225 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.514288 kubelet[2900]: E1108 00:31:26.513934 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.514288 kubelet[2900]: W1108 00:31:26.513943 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.514288 kubelet[2900]: E1108 00:31:26.513953 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.514723 kubelet[2900]: E1108 00:31:26.514477 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.514723 kubelet[2900]: W1108 00:31:26.514485 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.514723 kubelet[2900]: E1108 00:31:26.514645 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.516377 kubelet[2900]: E1108 00:31:26.514838 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.516377 kubelet[2900]: W1108 00:31:26.514845 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.516570 kubelet[2900]: E1108 00:31:26.516453 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.516952 kubelet[2900]: E1108 00:31:26.516679 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.517009 kubelet[2900]: W1108 00:31:26.516686 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.519014 kubelet[2900]: E1108 00:31:26.519004 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.519075 kubelet[2900]: W1108 00:31:26.519067 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.519298 kubelet[2900]: E1108 00:31:26.519291 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.519429 kubelet[2900]: W1108 00:31:26.519349 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.519429 kubelet[2900]: E1108 00:31:26.519362 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.519642 kubelet[2900]: E1108 00:31:26.519521 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.519642 kubelet[2900]: W1108 00:31:26.519527 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.519642 kubelet[2900]: E1108 00:31:26.519533 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.519809 kubelet[2900]: E1108 00:31:26.519759 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.519809 kubelet[2900]: W1108 00:31:26.519765 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.519809 kubelet[2900]: E1108 00:31:26.519771 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.519943 kubelet[2900]: E1108 00:31:26.519897 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:26.520733 kubelet[2900]: E1108 00:31:26.520673 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.520733 kubelet[2900]: W1108 00:31:26.520681 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.520733 kubelet[2900]: E1108 00:31:26.520688 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.520941 kubelet[2900]: E1108 00:31:26.520701 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.521056 kubelet[2900]: E1108 00:31:26.521049 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.521242 kubelet[2900]: W1108 00:31:26.521174 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.521242 kubelet[2900]: E1108 00:31:26.521185 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.521471 kubelet[2900]: E1108 00:31:26.521330 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.521471 kubelet[2900]: W1108 00:31:26.521448 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.521471 kubelet[2900]: E1108 00:31:26.521458 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.522822 containerd[1649]: time="2025-11-08T00:31:26.522519681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:26.522822 containerd[1649]: time="2025-11-08T00:31:26.522556649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:26.522822 containerd[1649]: time="2025-11-08T00:31:26.522755056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:26.523026 containerd[1649]: time="2025-11-08T00:31:26.523011370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:26.527951 kubelet[2900]: E1108 00:31:26.527933 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:26.528075 kubelet[2900]: W1108 00:31:26.528033 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:26.528075 kubelet[2900]: E1108 00:31:26.528049 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:26.547292 containerd[1649]: time="2025-11-08T00:31:26.547271166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-78nlx,Uid:0a5a994b-8c0c-4136-aaf4-ce922cd84ee9,Namespace:calico-system,Attempt:0,} returns sandbox id \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\"" Nov 8 00:31:27.894327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711852773.mount: Deactivated successfully. Nov 8 00:31:28.494994 kubelet[2900]: E1108 00:31:28.494966 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:28.520477 containerd[1649]: time="2025-11-08T00:31:28.520434172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:28.521081 containerd[1649]: time="2025-11-08T00:31:28.521011908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:31:28.521780 containerd[1649]: time="2025-11-08T00:31:28.521666716Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:28.534081 containerd[1649]: time="2025-11-08T00:31:28.534039898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:28.534744 containerd[1649]: time="2025-11-08T00:31:28.534373280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.216375012s" Nov 8 00:31:28.534744 containerd[1649]: time="2025-11-08T00:31:28.534393683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:31:28.542591 containerd[1649]: time="2025-11-08T00:31:28.541988224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:31:28.576138 containerd[1649]: time="2025-11-08T00:31:28.576104196Z" level=info msg="CreateContainer within sandbox \"b3fb762333a8c0ad8a010c0878f5d627c458e880010b3a2a882336b434fbf03e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:31:28.582015 containerd[1649]: time="2025-11-08T00:31:28.581989125Z" level=info msg="CreateContainer within sandbox \"b3fb762333a8c0ad8a010c0878f5d627c458e880010b3a2a882336b434fbf03e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c3270ce636a87fc830baa84216c4740bba08619caf5ae30534d2bc2c92d1cfc\"" Nov 8 00:31:28.587541 containerd[1649]: time="2025-11-08T00:31:28.587513131Z" level=info msg="StartContainer for \"4c3270ce636a87fc830baa84216c4740bba08619caf5ae30534d2bc2c92d1cfc\"" Nov 8 00:31:28.642877 containerd[1649]: time="2025-11-08T00:31:28.642806065Z" level=info msg="StartContainer for \"4c3270ce636a87fc830baa84216c4740bba08619caf5ae30534d2bc2c92d1cfc\" returns successfully" Nov 8 00:31:28.846523 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:31:28.841287 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:31:28.841322 systemd-resolved[1542]: Flushed all caches. Nov 8 00:31:29.632503 kubelet[2900]: E1108 00:31:29.632482 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.632503 kubelet[2900]: W1108 00:31:29.632497 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654258 kubelet[2900]: E1108 00:31:29.654028 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.654258 kubelet[2900]: E1108 00:31:29.654211 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654258 kubelet[2900]: W1108 00:31:29.654219 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654258 kubelet[2900]: E1108 00:31:29.654231 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.654411 kubelet[2900]: E1108 00:31:29.654321 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654411 kubelet[2900]: W1108 00:31:29.654326 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654411 kubelet[2900]: E1108 00:31:29.654331 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:29.654465 kubelet[2900]: E1108 00:31:29.654439 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654465 kubelet[2900]: W1108 00:31:29.654443 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654465 kubelet[2900]: E1108 00:31:29.654450 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.654555 kubelet[2900]: E1108 00:31:29.654539 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654555 kubelet[2900]: W1108 00:31:29.654545 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654555 kubelet[2900]: E1108 00:31:29.654550 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.654660 kubelet[2900]: E1108 00:31:29.654627 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654660 kubelet[2900]: W1108 00:31:29.654633 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654660 kubelet[2900]: E1108 00:31:29.654638 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.654716 kubelet[2900]: E1108 00:31:29.654712 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654739 kubelet[2900]: W1108 00:31:29.654716 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654739 kubelet[2900]: E1108 00:31:29.654720 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.654836 kubelet[2900]: E1108 00:31:29.654798 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.654836 kubelet[2900]: W1108 00:31:29.654807 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.654836 kubelet[2900]: E1108 00:31:29.654813 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.654898 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661558 kubelet[2900]: W1108 00:31:29.654903 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.654907 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.655005 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661558 kubelet[2900]: W1108 00:31:29.655009 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.655014 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.655121 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661558 kubelet[2900]: W1108 00:31:29.655125 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.655130 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661558 kubelet[2900]: E1108 00:31:29.655225 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661734 kubelet[2900]: W1108 00:31:29.655230 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661734 kubelet[2900]: E1108 00:31:29.655235 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661734 kubelet[2900]: E1108 00:31:29.655326 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661734 kubelet[2900]: W1108 00:31:29.655331 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661734 kubelet[2900]: E1108 00:31:29.655336 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:29.661734 kubelet[2900]: E1108 00:31:29.655426 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661734 kubelet[2900]: W1108 00:31:29.655431 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661734 kubelet[2900]: E1108 00:31:29.655435 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661734 kubelet[2900]: E1108 00:31:29.655525 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661734 kubelet[2900]: W1108 00:31:29.655530 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.655535 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.656763 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661896 kubelet[2900]: W1108 00:31:29.656768 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.656773 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.656868 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661896 kubelet[2900]: W1108 00:31:29.656873 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.656877 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.657007 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.661896 kubelet[2900]: W1108 00:31:29.657012 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.661896 kubelet[2900]: E1108 00:31:29.657016 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.657122 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.678275 kubelet[2900]: W1108 00:31:29.657128 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.657132 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.657221 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.678275 kubelet[2900]: W1108 00:31:29.657226 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.657231 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.657322 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.678275 kubelet[2900]: W1108 00:31:29.657326 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.657331 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.678275 kubelet[2900]: E1108 00:31:29.660769 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.687923 kubelet[2900]: W1108 00:31:29.660776 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.687923 kubelet[2900]: E1108 00:31:29.660784 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.687923 kubelet[2900]: E1108 00:31:29.660882 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.687923 kubelet[2900]: W1108 00:31:29.660887 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.687923 kubelet[2900]: E1108 00:31:29.660892 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:29.687923 kubelet[2900]: E1108 00:31:29.661006 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.687923 kubelet[2900]: W1108 00:31:29.661011 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.687923 kubelet[2900]: E1108 00:31:29.661016 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.687923 kubelet[2900]: E1108 00:31:29.673219 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.687923 kubelet[2900]: W1108 00:31:29.673227 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673236 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673518 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688104 kubelet[2900]: W1108 00:31:29.673523 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673529 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673628 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688104 kubelet[2900]: W1108 00:31:29.673632 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673638 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673811 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688104 kubelet[2900]: W1108 00:31:29.673815 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688104 kubelet[2900]: E1108 00:31:29.673820 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.673943 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688287 kubelet[2900]: W1108 00:31:29.673947 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.673952 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.674051 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688287 kubelet[2900]: W1108 00:31:29.674056 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.674060 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.674154 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688287 kubelet[2900]: W1108 00:31:29.674159 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.674164 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688287 kubelet[2900]: E1108 00:31:29.674263 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688442 kubelet[2900]: W1108 00:31:29.674268 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688442 kubelet[2900]: E1108 00:31:29.674273 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:31:29.688442 kubelet[2900]: E1108 00:31:29.674421 2900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:31:29.688442 kubelet[2900]: W1108 00:31:29.674426 2900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:31:29.688442 kubelet[2900]: E1108 00:31:29.674432 2900 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:31:30.065278 containerd[1649]: time="2025-11-08T00:31:30.065171413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:30.067262 containerd[1649]: time="2025-11-08T00:31:30.067235252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:31:30.067792 containerd[1649]: time="2025-11-08T00:31:30.067778778Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:30.079017 containerd[1649]: time="2025-11-08T00:31:30.078990664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:30.079687 containerd[1649]: time="2025-11-08T00:31:30.079672791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.537657984s" Nov 8 00:31:30.079795 containerd[1649]: time="2025-11-08T00:31:30.079742149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:31:30.087981 containerd[1649]: time="2025-11-08T00:31:30.087897316Z" level=info msg="CreateContainer within sandbox \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:31:30.214226 containerd[1649]: time="2025-11-08T00:31:30.213538106Z" level=info msg="CreateContainer within sandbox \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305\"" Nov 8 00:31:30.214443 containerd[1649]: time="2025-11-08T00:31:30.214423323Z" level=info msg="StartContainer for \"55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305\"" Nov 8 00:31:30.279694 containerd[1649]: time="2025-11-08T00:31:30.279663812Z" level=info msg="StartContainer for \"55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305\" returns successfully" Nov 8 00:31:30.476749 kubelet[2900]: E1108 00:31:30.476490 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:30.553928 systemd[1]: run-containerd-runc-k8s.io-55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305-runc.lb1waT.mount: Deactivated successfully. Nov 8 00:31:30.554017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305-rootfs.mount: Deactivated successfully. 
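Aside on the repeated kubelet FlexVolume messages above: the dynamic plugin prober executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" and tries to parse the command's stdout as JSON. Until the calico flexvol-driver container started in the preceding entries has copied that binary into place, the call produces no output, so the unmarshal fails with "unexpected end of JSON input" and the nodeagent~uds directory is skipped; the warnings clear once the real driver is installed. As a rough illustration of the contract involved, a minimal FlexVolume-style "init" handler might look like the hypothetical sketch below (this is not the real uds driver, only the shape of the JSON reply kubelet expects):

    #!/usr/bin/env python3
    # Illustrative sketch only: a stand-in FlexVolume-style driver that answers the
    # "init" call kubelet issues while probing plugin directories. The actual
    # nodeagent~uds/uds binary is installed by the calico flexvol-driver container.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            # Printing nothing here is what produces "unexpected end of JSON input"
            # in the kubelet log; a well-formed driver prints a JSON status object.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())
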
Nov 8 00:31:30.630568 kubelet[2900]: I1108 00:31:30.629975 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:31:30.664432 kubelet[2900]: I1108 00:31:30.664400 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6549c9bdb4-mjkt2" podStartSLOduration=3.425838091 podStartE2EDuration="5.649791626s" podCreationTimestamp="2025-11-08 00:31:25 +0000 UTC" firstStartedPulling="2025-11-08 00:31:26.317640855 +0000 UTC m=+20.931095726" lastFinishedPulling="2025-11-08 00:31:28.541594388 +0000 UTC m=+23.155049261" observedRunningTime="2025-11-08 00:31:29.809163412 +0000 UTC m=+24.422618291" watchObservedRunningTime="2025-11-08 00:31:30.649791626 +0000 UTC m=+25.263246501" Nov 8 00:31:30.889124 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:31:30.890023 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:31:30.889144 systemd-resolved[1542]: Flushed all caches. Nov 8 00:31:31.376706 containerd[1649]: time="2025-11-08T00:31:31.369007808Z" level=info msg="shim disconnected" id=55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305 namespace=k8s.io Nov 8 00:31:31.376706 containerd[1649]: time="2025-11-08T00:31:31.376635225Z" level=warning msg="cleaning up after shim disconnected" id=55b7e9e8ae58cd5b363673a9c0e813cba9ec6e1ab22d34bbf31f76502d9b5305 namespace=k8s.io Nov 8 00:31:31.376706 containerd[1649]: time="2025-11-08T00:31:31.376648410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:31.634067 containerd[1649]: time="2025-11-08T00:31:31.633489540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:31:32.476275 kubelet[2900]: E1108 00:31:32.476228 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:34.396272 containerd[1649]: time="2025-11-08T00:31:34.395804265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:34.396890 containerd[1649]: time="2025-11-08T00:31:34.396868441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:31:34.397378 containerd[1649]: time="2025-11-08T00:31:34.397365955Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:34.398779 containerd[1649]: time="2025-11-08T00:31:34.398765454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:34.399474 containerd[1649]: time="2025-11-08T00:31:34.399093079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.76558128s" Nov 8 00:31:34.399474 containerd[1649]: time="2025-11-08T00:31:34.399286011Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:31:34.400784 containerd[1649]: time="2025-11-08T00:31:34.400719526Z" level=info msg="CreateContainer within sandbox \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:31:34.420052 containerd[1649]: time="2025-11-08T00:31:34.420019977Z" level=info msg="CreateContainer within sandbox \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0037584a7e3937aef37e849a84e5124b4fd1902ae0bf6e43421d9b9e7162c29c\"" Nov 8 00:31:34.421397 containerd[1649]: time="2025-11-08T00:31:34.420523134Z" level=info msg="StartContainer for \"0037584a7e3937aef37e849a84e5124b4fd1902ae0bf6e43421d9b9e7162c29c\"" Nov 8 00:31:34.470229 containerd[1649]: time="2025-11-08T00:31:34.470202488Z" level=info msg="StartContainer for \"0037584a7e3937aef37e849a84e5124b4fd1902ae0bf6e43421d9b9e7162c29c\" returns successfully" Nov 8 00:31:34.476935 kubelet[2900]: E1108 00:31:34.476803 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:35.802747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0037584a7e3937aef37e849a84e5124b4fd1902ae0bf6e43421d9b9e7162c29c-rootfs.mount: Deactivated successfully. Nov 8 00:31:35.806782 containerd[1649]: time="2025-11-08T00:31:35.806672094Z" level=info msg="shim disconnected" id=0037584a7e3937aef37e849a84e5124b4fd1902ae0bf6e43421d9b9e7162c29c namespace=k8s.io Nov 8 00:31:35.806782 containerd[1649]: time="2025-11-08T00:31:35.806709008Z" level=warning msg="cleaning up after shim disconnected" id=0037584a7e3937aef37e849a84e5124b4fd1902ae0bf6e43421d9b9e7162c29c namespace=k8s.io Nov 8 00:31:35.806782 containerd[1649]: time="2025-11-08T00:31:35.806715316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:35.815319 containerd[1649]: time="2025-11-08T00:31:35.815293610Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:31:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:31:35.852657 kubelet[2900]: I1108 00:31:35.852444 2900 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:31:35.903713 kubelet[2900]: I1108 00:31:35.903674 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0dfff3f-1568-463e-aed1-906fd9d64aa0-config\") pod \"goldmane-666569f655-cblkz\" (UID: \"c0dfff3f-1568-463e-aed1-906fd9d64aa0\") " pod="calico-system/goldmane-666569f655-cblkz" Nov 8 00:31:35.904207 kubelet[2900]: I1108 00:31:35.903785 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgz5\" (UniqueName: \"kubernetes.io/projected/939bcda9-0a19-4e96-ac5d-405850005d65-kube-api-access-sbgz5\") pod \"coredns-668d6bf9bc-wq4zw\" (UID: \"939bcda9-0a19-4e96-ac5d-405850005d65\") " pod="kube-system/coredns-668d6bf9bc-wq4zw" Nov 8 00:31:35.904207 kubelet[2900]: I1108 
00:31:35.903809 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b2ed5b8-86fb-4b7a-9b26-26f59088b35b-config-volume\") pod \"coredns-668d6bf9bc-5p9cw\" (UID: \"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b\") " pod="kube-system/coredns-668d6bf9bc-5p9cw" Nov 8 00:31:35.904207 kubelet[2900]: I1108 00:31:35.903821 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce042206-9988-482a-bbf2-d9505d456f72-whisker-backend-key-pair\") pod \"whisker-764fb649d4-rkjxq\" (UID: \"ce042206-9988-482a-bbf2-d9505d456f72\") " pod="calico-system/whisker-764fb649d4-rkjxq" Nov 8 00:31:35.904207 kubelet[2900]: I1108 00:31:35.903831 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce042206-9988-482a-bbf2-d9505d456f72-whisker-ca-bundle\") pod \"whisker-764fb649d4-rkjxq\" (UID: \"ce042206-9988-482a-bbf2-d9505d456f72\") " pod="calico-system/whisker-764fb649d4-rkjxq" Nov 8 00:31:35.904207 kubelet[2900]: I1108 00:31:35.903845 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/939bcda9-0a19-4e96-ac5d-405850005d65-config-volume\") pod \"coredns-668d6bf9bc-wq4zw\" (UID: \"939bcda9-0a19-4e96-ac5d-405850005d65\") " pod="kube-system/coredns-668d6bf9bc-wq4zw" Nov 8 00:31:35.905662 kubelet[2900]: I1108 00:31:35.903855 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c0dfff3f-1568-463e-aed1-906fd9d64aa0-goldmane-key-pair\") pod \"goldmane-666569f655-cblkz\" (UID: \"c0dfff3f-1568-463e-aed1-906fd9d64aa0\") " pod="calico-system/goldmane-666569f655-cblkz" Nov 8 00:31:35.905662 kubelet[2900]: I1108 00:31:35.903866 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgnlq\" (UniqueName: \"kubernetes.io/projected/ce042206-9988-482a-bbf2-d9505d456f72-kube-api-access-rgnlq\") pod \"whisker-764fb649d4-rkjxq\" (UID: \"ce042206-9988-482a-bbf2-d9505d456f72\") " pod="calico-system/whisker-764fb649d4-rkjxq" Nov 8 00:31:35.905662 kubelet[2900]: I1108 00:31:35.903877 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0dfff3f-1568-463e-aed1-906fd9d64aa0-goldmane-ca-bundle\") pod \"goldmane-666569f655-cblkz\" (UID: \"c0dfff3f-1568-463e-aed1-906fd9d64aa0\") " pod="calico-system/goldmane-666569f655-cblkz" Nov 8 00:31:35.905662 kubelet[2900]: I1108 00:31:35.903887 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtt6s\" (UniqueName: \"kubernetes.io/projected/c0dfff3f-1568-463e-aed1-906fd9d64aa0-kube-api-access-jtt6s\") pod \"goldmane-666569f655-cblkz\" (UID: \"c0dfff3f-1568-463e-aed1-906fd9d64aa0\") " pod="calico-system/goldmane-666569f655-cblkz" Nov 8 00:31:35.905662 kubelet[2900]: I1108 00:31:35.903897 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pct\" (UniqueName: \"kubernetes.io/projected/8b2ed5b8-86fb-4b7a-9b26-26f59088b35b-kube-api-access-m2pct\") pod \"coredns-668d6bf9bc-5p9cw\" (UID: 
\"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b\") " pod="kube-system/coredns-668d6bf9bc-5p9cw" Nov 8 00:31:36.004205 kubelet[2900]: I1108 00:31:36.004079 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqms\" (UniqueName: \"kubernetes.io/projected/d8943d47-ae19-484d-8d89-dda3dcc29a60-kube-api-access-4fqms\") pod \"calico-kube-controllers-655bcd5b7f-mvm84\" (UID: \"d8943d47-ae19-484d-8d89-dda3dcc29a60\") " pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" Nov 8 00:31:36.004205 kubelet[2900]: I1108 00:31:36.004132 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/536546db-8e23-43bc-ada9-ff6aca8accce-calico-apiserver-certs\") pod \"calico-apiserver-84758c967d-czg8s\" (UID: \"536546db-8e23-43bc-ada9-ff6aca8accce\") " pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" Nov 8 00:31:36.005395 kubelet[2900]: I1108 00:31:36.004348 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/558dc8c2-70d1-4eda-a967-93f57dec2dc2-calico-apiserver-certs\") pod \"calico-apiserver-84758c967d-hp26p\" (UID: \"558dc8c2-70d1-4eda-a967-93f57dec2dc2\") " pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" Nov 8 00:31:36.005395 kubelet[2900]: I1108 00:31:36.004363 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8943d47-ae19-484d-8d89-dda3dcc29a60-tigera-ca-bundle\") pod \"calico-kube-controllers-655bcd5b7f-mvm84\" (UID: \"d8943d47-ae19-484d-8d89-dda3dcc29a60\") " pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" Nov 8 00:31:36.005395 kubelet[2900]: I1108 00:31:36.004404 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9gj7\" (UniqueName: \"kubernetes.io/projected/558dc8c2-70d1-4eda-a967-93f57dec2dc2-kube-api-access-g9gj7\") pod \"calico-apiserver-84758c967d-hp26p\" (UID: \"558dc8c2-70d1-4eda-a967-93f57dec2dc2\") " pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" Nov 8 00:31:36.005395 kubelet[2900]: I1108 00:31:36.004424 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9hr\" (UniqueName: \"kubernetes.io/projected/536546db-8e23-43bc-ada9-ff6aca8accce-kube-api-access-8w9hr\") pod \"calico-apiserver-84758c967d-czg8s\" (UID: \"536546db-8e23-43bc-ada9-ff6aca8accce\") " pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" Nov 8 00:31:36.219684 containerd[1649]: time="2025-11-08T00:31:36.218978857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-hp26p,Uid:558dc8c2-70d1-4eda-a967-93f57dec2dc2,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:31:36.220607 containerd[1649]: time="2025-11-08T00:31:36.220588836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wq4zw,Uid:939bcda9-0a19-4e96-ac5d-405850005d65,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:36.221734 containerd[1649]: time="2025-11-08T00:31:36.221720046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-czg8s,Uid:536546db-8e23-43bc-ada9-ff6aca8accce,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:31:36.226214 containerd[1649]: time="2025-11-08T00:31:36.226190608Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655bcd5b7f-mvm84,Uid:d8943d47-ae19-484d-8d89-dda3dcc29a60,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:36.226434 containerd[1649]: time="2025-11-08T00:31:36.226422468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cblkz,Uid:c0dfff3f-1568-463e-aed1-906fd9d64aa0,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:36.228447 containerd[1649]: time="2025-11-08T00:31:36.228433024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5p9cw,Uid:8b2ed5b8-86fb-4b7a-9b26-26f59088b35b,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:36.230531 containerd[1649]: time="2025-11-08T00:31:36.230517344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764fb649d4-rkjxq,Uid:ce042206-9988-482a-bbf2-d9505d456f72,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:36.479515 containerd[1649]: time="2025-11-08T00:31:36.479155195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4wsd,Uid:c5b205c6-f534-4f27-bd2e-0a8fe1443335,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:36.491903 containerd[1649]: time="2025-11-08T00:31:36.491869009Z" level=error msg="Failed to destroy network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.492759 containerd[1649]: time="2025-11-08T00:31:36.492288722Z" level=error msg="Failed to destroy network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.494802 containerd[1649]: time="2025-11-08T00:31:36.494778367Z" level=error msg="encountered an error cleaning up failed sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.506932 containerd[1649]: time="2025-11-08T00:31:36.506884542Z" level=error msg="Failed to destroy network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.507592 containerd[1649]: time="2025-11-08T00:31:36.507485195Z" level=error msg="encountered an error cleaning up failed sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.510064 containerd[1649]: time="2025-11-08T00:31:36.510040140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5p9cw,Uid:8b2ed5b8-86fb-4b7a-9b26-26f59088b35b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.510335 containerd[1649]: time="2025-11-08T00:31:36.497154993Z" level=error msg="encountered an error cleaning up failed sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.510425 containerd[1649]: time="2025-11-08T00:31:36.510410414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-czg8s,Uid:536546db-8e23-43bc-ada9-ff6aca8accce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.510725 containerd[1649]: time="2025-11-08T00:31:36.510593679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-hp26p,Uid:558dc8c2-70d1-4eda-a967-93f57dec2dc2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.516039 containerd[1649]: time="2025-11-08T00:31:36.515909850Z" level=error msg="Failed to destroy network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.516039 containerd[1649]: time="2025-11-08T00:31:36.507179637Z" level=error msg="Failed to destroy network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.516536 containerd[1649]: time="2025-11-08T00:31:36.516478341Z" level=error msg="Failed to destroy network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.516935 containerd[1649]: time="2025-11-08T00:31:36.516758637Z" level=error msg="encountered an error cleaning up failed sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.517186 containerd[1649]: time="2025-11-08T00:31:36.516785409Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-655bcd5b7f-mvm84,Uid:d8943d47-ae19-484d-8d89-dda3dcc29a60,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.517238 containerd[1649]: time="2025-11-08T00:31:36.516829045Z" level=error msg="encountered an error cleaning up failed sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.517238 containerd[1649]: time="2025-11-08T00:31:36.517222531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cblkz,Uid:c0dfff3f-1568-463e-aed1-906fd9d64aa0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.517288 containerd[1649]: time="2025-11-08T00:31:36.516898314Z" level=error msg="encountered an error cleaning up failed sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.517288 containerd[1649]: time="2025-11-08T00:31:36.517276344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764fb649d4-rkjxq,Uid:ce042206-9988-482a-bbf2-d9505d456f72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.517337 containerd[1649]: time="2025-11-08T00:31:36.516929313Z" level=error msg="Failed to destroy network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.518923 containerd[1649]: time="2025-11-08T00:31:36.517489762Z" level=error msg="encountered an error cleaning up failed sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.518923 containerd[1649]: time="2025-11-08T00:31:36.517509192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wq4zw,Uid:939bcda9-0a19-4e96-ac5d-405850005d65,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.529076 kubelet[2900]: E1108 00:31:36.528997 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.529526 kubelet[2900]: E1108 00:31:36.529348 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.529585 kubelet[2900]: E1108 00:31:36.517052 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.530144 kubelet[2900]: E1108 00:31:36.529637 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.530208 kubelet[2900]: E1108 00:31:36.530197 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.530267 kubelet[2900]: E1108 00:31:36.530257 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.530305 kubelet[2900]: E1108 00:31:36.517053 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.538589 kubelet[2900]: E1108 00:31:36.538555 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" Nov 8 00:31:36.538715 kubelet[2900]: E1108 00:31:36.538589 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" Nov 8 00:31:36.538715 kubelet[2900]: E1108 00:31:36.538634 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-655bcd5b7f-mvm84_calico-system(d8943d47-ae19-484d-8d89-dda3dcc29a60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-655bcd5b7f-mvm84_calico-system(d8943d47-ae19-484d-8d89-dda3dcc29a60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:31:36.539334 kubelet[2900]: E1108 00:31:36.538471 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" Nov 8 00:31:36.539371 kubelet[2900]: E1108 00:31:36.539336 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" Nov 8 00:31:36.539371 kubelet[2900]: E1108 00:31:36.539356 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84758c967d-hp26p_calico-apiserver(558dc8c2-70d1-4eda-a967-93f57dec2dc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84758c967d-hp26p_calico-apiserver(558dc8c2-70d1-4eda-a967-93f57dec2dc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:31:36.539816 kubelet[2900]: 
E1108 00:31:36.539797 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cblkz" Nov 8 00:31:36.539853 kubelet[2900]: E1108 00:31:36.539812 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cblkz" Nov 8 00:31:36.539853 kubelet[2900]: E1108 00:31:36.539832 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cblkz_calico-system(c0dfff3f-1568-463e-aed1-906fd9d64aa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cblkz_calico-system(c0dfff3f-1568-463e-aed1-906fd9d64aa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:31:36.539909 kubelet[2900]: E1108 00:31:36.539855 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-764fb649d4-rkjxq" Nov 8 00:31:36.539909 kubelet[2900]: E1108 00:31:36.539870 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-764fb649d4-rkjxq" Nov 8 00:31:36.539909 kubelet[2900]: E1108 00:31:36.539887 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-764fb649d4-rkjxq_calico-system(ce042206-9988-482a-bbf2-d9505d456f72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-764fb649d4-rkjxq_calico-system(ce042206-9988-482a-bbf2-d9505d456f72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-764fb649d4-rkjxq" podUID="ce042206-9988-482a-bbf2-d9505d456f72" Nov 8 
00:31:36.539990 kubelet[2900]: E1108 00:31:36.539904 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" Nov 8 00:31:36.539990 kubelet[2900]: E1108 00:31:36.539917 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" Nov 8 00:31:36.539990 kubelet[2900]: E1108 00:31:36.539930 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84758c967d-czg8s_calico-apiserver(536546db-8e23-43bc-ada9-ff6aca8accce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84758c967d-czg8s_calico-apiserver(536546db-8e23-43bc-ada9-ff6aca8accce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:31:36.540065 kubelet[2900]: E1108 00:31:36.539945 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5p9cw" Nov 8 00:31:36.540065 kubelet[2900]: E1108 00:31:36.539953 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5p9cw" Nov 8 00:31:36.540065 kubelet[2900]: E1108 00:31:36.539965 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5p9cw_kube-system(8b2ed5b8-86fb-4b7a-9b26-26f59088b35b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5p9cw_kube-system(8b2ed5b8-86fb-4b7a-9b26-26f59088b35b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-5p9cw" podUID="8b2ed5b8-86fb-4b7a-9b26-26f59088b35b" Nov 8 00:31:36.540135 kubelet[2900]: E1108 00:31:36.539980 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wq4zw" Nov 8 00:31:36.540135 kubelet[2900]: E1108 00:31:36.539988 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wq4zw" Nov 8 00:31:36.540135 kubelet[2900]: E1108 00:31:36.540001 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wq4zw_kube-system(939bcda9-0a19-4e96-ac5d-405850005d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wq4zw_kube-system(939bcda9-0a19-4e96-ac5d-405850005d65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wq4zw" podUID="939bcda9-0a19-4e96-ac5d-405850005d65" Nov 8 00:31:36.553461 containerd[1649]: time="2025-11-08T00:31:36.553424208Z" level=error msg="Failed to destroy network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.554105 containerd[1649]: time="2025-11-08T00:31:36.554062293Z" level=error msg="encountered an error cleaning up failed sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.554195 containerd[1649]: time="2025-11-08T00:31:36.554174650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4wsd,Uid:c5b205c6-f534-4f27-bd2e-0a8fe1443335,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.554434 kubelet[2900]: E1108 00:31:36.554413 2900 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.554556 kubelet[2900]: E1108 00:31:36.554496 2900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:36.554556 kubelet[2900]: E1108 00:31:36.554510 2900 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4wsd" Nov 8 00:31:36.554625 kubelet[2900]: E1108 00:31:36.554609 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:36.652155 kubelet[2900]: I1108 00:31:36.651834 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:31:36.652310 kubelet[2900]: I1108 00:31:36.652296 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:31:36.663414 kubelet[2900]: I1108 00:31:36.663325 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:31:36.675403 containerd[1649]: time="2025-11-08T00:31:36.675264521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:31:36.680316 kubelet[2900]: I1108 00:31:36.680201 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:31:36.691089 containerd[1649]: time="2025-11-08T00:31:36.689060561Z" level=info msg="StopPodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\"" Nov 8 00:31:36.691089 containerd[1649]: time="2025-11-08T00:31:36.689826681Z" level=info msg="Ensure that sandbox 83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e in task-service has been cleanup successfully" Nov 8 00:31:36.691269 containerd[1649]: time="2025-11-08T00:31:36.691258073Z" level=info msg="StopPodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\"" Nov 8 00:31:36.691383 containerd[1649]: 
time="2025-11-08T00:31:36.691373350Z" level=info msg="Ensure that sandbox 24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b in task-service has been cleanup successfully" Nov 8 00:31:36.692511 containerd[1649]: time="2025-11-08T00:31:36.692499107Z" level=info msg="StopPodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\"" Nov 8 00:31:36.692825 containerd[1649]: time="2025-11-08T00:31:36.692814594Z" level=info msg="Ensure that sandbox caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939 in task-service has been cleanup successfully" Nov 8 00:31:36.693844 containerd[1649]: time="2025-11-08T00:31:36.692686596Z" level=info msg="StopPodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\"" Nov 8 00:31:36.693974 containerd[1649]: time="2025-11-08T00:31:36.693963838Z" level=info msg="Ensure that sandbox 96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8 in task-service has been cleanup successfully" Nov 8 00:31:36.694602 kubelet[2900]: I1108 00:31:36.694592 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:31:36.695982 containerd[1649]: time="2025-11-08T00:31:36.695967966Z" level=info msg="StopPodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\"" Nov 8 00:31:36.696112 containerd[1649]: time="2025-11-08T00:31:36.696102902Z" level=info msg="Ensure that sandbox f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0 in task-service has been cleanup successfully" Nov 8 00:31:36.700111 kubelet[2900]: I1108 00:31:36.700091 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:31:36.701183 containerd[1649]: time="2025-11-08T00:31:36.701149356Z" level=info msg="StopPodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\"" Nov 8 00:31:36.701445 containerd[1649]: time="2025-11-08T00:31:36.701429497Z" level=info msg="Ensure that sandbox f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72 in task-service has been cleanup successfully" Nov 8 00:31:36.706057 kubelet[2900]: I1108 00:31:36.706032 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:31:36.707141 containerd[1649]: time="2025-11-08T00:31:36.707125089Z" level=info msg="StopPodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\"" Nov 8 00:31:36.707424 containerd[1649]: time="2025-11-08T00:31:36.707317427Z" level=info msg="Ensure that sandbox 3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec in task-service has been cleanup successfully" Nov 8 00:31:36.712965 kubelet[2900]: I1108 00:31:36.712950 2900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:31:36.714611 containerd[1649]: time="2025-11-08T00:31:36.714143167Z" level=info msg="StopPodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\"" Nov 8 00:31:36.714611 containerd[1649]: time="2025-11-08T00:31:36.714299802Z" level=info msg="Ensure that sandbox 7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6 in task-service has been cleanup successfully" Nov 8 00:31:36.738777 containerd[1649]: 
time="2025-11-08T00:31:36.737458321Z" level=error msg="StopPodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" failed" error="failed to destroy network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.738777 containerd[1649]: time="2025-11-08T00:31:36.737526392Z" level=error msg="StopPodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" failed" error="failed to destroy network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.738863 kubelet[2900]: E1108 00:31:36.737643 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:31:36.740174 kubelet[2900]: E1108 00:31:36.739739 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:31:36.750043 kubelet[2900]: E1108 00:31:36.746014 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e"} Nov 8 00:31:36.750512 kubelet[2900]: E1108 00:31:36.750498 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"939bcda9-0a19-4e96-ac5d-405850005d65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.750620 kubelet[2900]: E1108 00:31:36.750603 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"939bcda9-0a19-4e96-ac5d-405850005d65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wq4zw" podUID="939bcda9-0a19-4e96-ac5d-405850005d65" Nov 8 00:31:36.750690 kubelet[2900]: E1108 00:31:36.746084 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b"} Nov 8 00:31:36.750739 kubelet[2900]: E1108 00:31:36.750730 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.750814 kubelet[2900]: E1108 00:31:36.750803 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5b205c6-f534-4f27-bd2e-0a8fe1443335\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:36.776094 containerd[1649]: time="2025-11-08T00:31:36.776065119Z" level=error msg="StopPodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" failed" error="failed to destroy network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.776681 kubelet[2900]: E1108 00:31:36.776442 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:31:36.776681 kubelet[2900]: E1108 00:31:36.776476 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939"} Nov 8 00:31:36.776681 kubelet[2900]: E1108 00:31:36.776498 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8943d47-ae19-484d-8d89-dda3dcc29a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.776681 kubelet[2900]: E1108 00:31:36.776513 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8943d47-ae19-484d-8d89-dda3dcc29a60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:31:36.777004 containerd[1649]: time="2025-11-08T00:31:36.776984258Z" level=error msg="StopPodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" failed" error="failed to destroy network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.777080 containerd[1649]: time="2025-11-08T00:31:36.777052408Z" level=error msg="StopPodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" failed" error="failed to destroy network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.777389 kubelet[2900]: E1108 00:31:36.777139 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:31:36.777389 kubelet[2900]: E1108 00:31:36.777163 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8"} Nov 8 00:31:36.777389 kubelet[2900]: E1108 00:31:36.777179 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"536546db-8e23-43bc-ada9-ff6aca8accce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.777389 kubelet[2900]: E1108 00:31:36.777196 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"536546db-8e23-43bc-ada9-ff6aca8accce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:31:36.777614 kubelet[2900]: E1108 00:31:36.777544 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:31:36.777614 kubelet[2900]: E1108 00:31:36.777560 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0"} Nov 8 00:31:36.777614 kubelet[2900]: E1108 00:31:36.777573 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce042206-9988-482a-bbf2-d9505d456f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.777614 kubelet[2900]: E1108 00:31:36.777599 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce042206-9988-482a-bbf2-d9505d456f72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-764fb649d4-rkjxq" podUID="ce042206-9988-482a-bbf2-d9505d456f72" Nov 8 00:31:36.777739 containerd[1649]: time="2025-11-08T00:31:36.777620260Z" level=error msg="StopPodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" failed" error="failed to destroy network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.777959 kubelet[2900]: E1108 00:31:36.777797 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:31:36.777959 kubelet[2900]: E1108 00:31:36.777811 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72"} Nov 8 00:31:36.777959 kubelet[2900]: E1108 00:31:36.777824 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.777959 kubelet[2900]: E1108 00:31:36.777834 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5p9cw" podUID="8b2ed5b8-86fb-4b7a-9b26-26f59088b35b" Nov 8 00:31:36.778635 containerd[1649]: time="2025-11-08T00:31:36.778517473Z" level=error msg="StopPodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" failed" error="failed to destroy network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.778671 kubelet[2900]: E1108 00:31:36.778581 2900 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:31:36.778671 kubelet[2900]: E1108 00:31:36.778597 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6"} Nov 8 00:31:36.778671 kubelet[2900]: E1108 00:31:36.778610 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"558dc8c2-70d1-4eda-a967-93f57dec2dc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.778671 kubelet[2900]: E1108 00:31:36.778621 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"558dc8c2-70d1-4eda-a967-93f57dec2dc2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:31:36.780791 containerd[1649]: time="2025-11-08T00:31:36.780644846Z" level=error msg="StopPodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" failed" error="failed to destroy network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:31:36.780830 kubelet[2900]: E1108 00:31:36.780731 2900 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:31:36.780830 kubelet[2900]: E1108 00:31:36.780749 2900 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec"} Nov 8 00:31:36.780830 kubelet[2900]: E1108 00:31:36.780766 2900 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0dfff3f-1568-463e-aed1-906fd9d64aa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:31:36.780830 kubelet[2900]: E1108 00:31:36.780777 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0dfff3f-1568-463e-aed1-906fd9d64aa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:31:36.840990 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:31:36.841011 systemd-resolved[1542]: Flushed all caches. Nov 8 00:31:36.841943 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:31:40.873332 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:31:40.874091 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:31:40.873352 systemd-resolved[1542]: Flushed all caches. Nov 8 00:31:40.991236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071734701.mount: Deactivated successfully. 
Nov 8 00:31:41.038119 containerd[1649]: time="2025-11-08T00:31:41.038080459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:31:41.039012 containerd[1649]: time="2025-11-08T00:31:41.038710912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:41.052333 containerd[1649]: time="2025-11-08T00:31:41.052303002Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:41.053492 containerd[1649]: time="2025-11-08T00:31:41.053083904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:41.054653 containerd[1649]: time="2025-11-08T00:31:41.054632927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.377514395s" Nov 8 00:31:41.054712 containerd[1649]: time="2025-11-08T00:31:41.054702066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:31:41.068090 containerd[1649]: time="2025-11-08T00:31:41.068063813Z" level=info msg="CreateContainer within sandbox \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:31:41.095104 containerd[1649]: time="2025-11-08T00:31:41.095077926Z" level=info msg="CreateContainer within sandbox \"afe81996dbafbb6f5df31dcaa2a55e729ed96df660009ec11e9893b8b2a2459f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b29e3cca10042891180f708237a59ab7f1e877430fdf0aed5e3f63d4ff4f3cf4\"" Nov 8 00:31:41.101258 containerd[1649]: time="2025-11-08T00:31:41.100979224Z" level=info msg="StartContainer for \"b29e3cca10042891180f708237a59ab7f1e877430fdf0aed5e3f63d4ff4f3cf4\"" Nov 8 00:31:41.231222 containerd[1649]: time="2025-11-08T00:31:41.230465197Z" level=info msg="StartContainer for \"b29e3cca10042891180f708237a59ab7f1e877430fdf0aed5e3f63d4ff4f3cf4\" returns successfully" Nov 8 00:31:41.402413 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:31:41.404240 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 8 00:31:41.674874 containerd[1649]: time="2025-11-08T00:31:41.674179294Z" level=info msg="StopPodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\"" Nov 8 00:31:42.371405 kubelet[2900]: I1108 00:31:42.353622 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-78nlx" podStartSLOduration=1.817420554 podStartE2EDuration="16.324586644s" podCreationTimestamp="2025-11-08 00:31:26 +0000 UTC" firstStartedPulling="2025-11-08 00:31:26.548049968 +0000 UTC m=+21.161504838" lastFinishedPulling="2025-11-08 00:31:41.055216054 +0000 UTC m=+35.668670928" observedRunningTime="2025-11-08 00:31:41.763499568 +0000 UTC m=+36.376954445" watchObservedRunningTime="2025-11-08 00:31:42.324586644 +0000 UTC m=+36.938041519" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.325 [INFO][4081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.336 [INFO][4081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" iface="eth0" netns="/var/run/netns/cni-452d3a4c-ad82-d5d9-9b6d-5fbf9a58454d" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.336 [INFO][4081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" iface="eth0" netns="/var/run/netns/cni-452d3a4c-ad82-d5d9-9b6d-5fbf9a58454d" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.353 [INFO][4081] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" iface="eth0" netns="/var/run/netns/cni-452d3a4c-ad82-d5d9-9b6d-5fbf9a58454d" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.353 [INFO][4081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.353 [INFO][4081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.703 [INFO][4145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.709 [INFO][4145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.709 [INFO][4145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.728 [WARNING][4145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.728 [INFO][4145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.731 [INFO][4145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:42.737647 containerd[1649]: 2025-11-08 00:31:42.734 [INFO][4081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:31:42.741049 containerd[1649]: time="2025-11-08T00:31:42.740984257Z" level=info msg="TearDown network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" successfully" Nov 8 00:31:42.741049 containerd[1649]: time="2025-11-08T00:31:42.741048783Z" level=info msg="StopPodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" returns successfully" Nov 8 00:31:42.743106 systemd[1]: run-netns-cni\x2d452d3a4c\x2dad82\x2dd5d9\x2d9b6d\x2d5fbf9a58454d.mount: Deactivated successfully. Nov 8 00:31:42.867386 kubelet[2900]: I1108 00:31:42.867306 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgnlq\" (UniqueName: \"kubernetes.io/projected/ce042206-9988-482a-bbf2-d9505d456f72-kube-api-access-rgnlq\") pod \"ce042206-9988-482a-bbf2-d9505d456f72\" (UID: \"ce042206-9988-482a-bbf2-d9505d456f72\") " Nov 8 00:31:42.867386 kubelet[2900]: I1108 00:31:42.867380 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce042206-9988-482a-bbf2-d9505d456f72-whisker-backend-key-pair\") pod \"ce042206-9988-482a-bbf2-d9505d456f72\" (UID: \"ce042206-9988-482a-bbf2-d9505d456f72\") " Nov 8 00:31:42.871260 kubelet[2900]: I1108 00:31:42.870793 2900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce042206-9988-482a-bbf2-d9505d456f72-whisker-ca-bundle\") pod \"ce042206-9988-482a-bbf2-d9505d456f72\" (UID: \"ce042206-9988-482a-bbf2-d9505d456f72\") " Nov 8 00:31:42.878951 kubelet[2900]: I1108 00:31:42.878872 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce042206-9988-482a-bbf2-d9505d456f72-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce042206-9988-482a-bbf2-d9505d456f72" (UID: "ce042206-9988-482a-bbf2-d9505d456f72"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:31:42.879074 kubelet[2900]: I1108 00:31:42.876888 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce042206-9988-482a-bbf2-d9505d456f72-kube-api-access-rgnlq" (OuterVolumeSpecName: "kube-api-access-rgnlq") pod "ce042206-9988-482a-bbf2-d9505d456f72" (UID: "ce042206-9988-482a-bbf2-d9505d456f72"). InnerVolumeSpecName "kube-api-access-rgnlq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:31:42.879952 kubelet[2900]: I1108 00:31:42.879244 2900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce042206-9988-482a-bbf2-d9505d456f72-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce042206-9988-482a-bbf2-d9505d456f72" (UID: "ce042206-9988-482a-bbf2-d9505d456f72"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:31:42.922642 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:31:42.920989 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:31:42.920995 systemd-resolved[1542]: Flushed all caches. Nov 8 00:31:42.972059 kubelet[2900]: I1108 00:31:42.972030 2900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rgnlq\" (UniqueName: \"kubernetes.io/projected/ce042206-9988-482a-bbf2-d9505d456f72-kube-api-access-rgnlq\") on node \"localhost\" DevicePath \"\"" Nov 8 00:31:42.972059 kubelet[2900]: I1108 00:31:42.972053 2900 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce042206-9988-482a-bbf2-d9505d456f72-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:31:42.972059 kubelet[2900]: I1108 00:31:42.972059 2900 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce042206-9988-482a-bbf2-d9505d456f72-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:31:42.994543 systemd[1]: run-containerd-runc-k8s.io-b29e3cca10042891180f708237a59ab7f1e877430fdf0aed5e3f63d4ff4f3cf4-runc.9Qb4S4.mount: Deactivated successfully. Nov 8 00:31:42.994642 systemd[1]: var-lib-kubelet-pods-ce042206\x2d9988\x2d482a\x2dbbf2\x2dd9505d456f72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drgnlq.mount: Deactivated successfully. Nov 8 00:31:42.994714 systemd[1]: var-lib-kubelet-pods-ce042206\x2d9988\x2d482a\x2dbbf2\x2dd9505d456f72-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:31:43.706363 kubelet[2900]: I1108 00:31:43.706111 2900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:31:43.977113 kubelet[2900]: I1108 00:31:43.977024 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e4b7614-5497-46ae-a96f-7f92d3916cde-whisker-backend-key-pair\") pod \"whisker-5b7c8bd886-mhdkg\" (UID: \"1e4b7614-5497-46ae-a96f-7f92d3916cde\") " pod="calico-system/whisker-5b7c8bd886-mhdkg" Nov 8 00:31:43.977113 kubelet[2900]: I1108 00:31:43.977064 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e4b7614-5497-46ae-a96f-7f92d3916cde-whisker-ca-bundle\") pod \"whisker-5b7c8bd886-mhdkg\" (UID: \"1e4b7614-5497-46ae-a96f-7f92d3916cde\") " pod="calico-system/whisker-5b7c8bd886-mhdkg" Nov 8 00:31:43.977113 kubelet[2900]: I1108 00:31:43.977079 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss2b5\" (UniqueName: \"kubernetes.io/projected/1e4b7614-5497-46ae-a96f-7f92d3916cde-kube-api-access-ss2b5\") pod \"whisker-5b7c8bd886-mhdkg\" (UID: \"1e4b7614-5497-46ae-a96f-7f92d3916cde\") " pod="calico-system/whisker-5b7c8bd886-mhdkg" Nov 8 00:31:44.165844 containerd[1649]: time="2025-11-08T00:31:44.165539392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7c8bd886-mhdkg,Uid:1e4b7614-5497-46ae-a96f-7f92d3916cde,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:44.381929 systemd-networkd[1287]: calia522a2efb7b: Link UP Nov 8 00:31:44.382199 systemd-networkd[1287]: calia522a2efb7b: Gained carrier Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.250 [INFO][4293] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.273 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0 whisker-5b7c8bd886- calico-system 1e4b7614-5497-46ae-a96f-7f92d3916cde 872 0 2025-11-08 00:31:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b7c8bd886 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b7c8bd886-mhdkg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia522a2efb7b [] [] }} ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.273 [INFO][4293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.310 [INFO][4314] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" HandleID="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Workload="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 
00:31:44.311 [INFO][4314] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" HandleID="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Workload="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b7c8bd886-mhdkg", "timestamp":"2025-11-08 00:31:44.310620813 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.311 [INFO][4314] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.311 [INFO][4314] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.311 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.320 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.334 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.339 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.341 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.343 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.343 [INFO][4314] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.344 [INFO][4314] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03 Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.348 [INFO][4314] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.352 [INFO][4314] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.352 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" host="localhost" Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.352 [INFO][4314] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
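The ipam/ipam.go messages above show Calico's block-affinity allocation succeeding once calico-node is running: the plugin acquires the host-wide IPAM lock, confirms the node's affine /26 block (192.168.88.128/26), and claims the first free address in it (192.168.88.129) under a handle named after the sandbox. The sketch below only illustrates that first-free-address pattern under those assumptions; it is not Calico's real IPAM implementation, and the handle string is a placeholder for the k8s-pod-network.<sandbox-id> handle recorded in the log.

// A toy illustration of the first-free-address pattern visible in the IPAM lines above.
package main

import (
	"fmt"
	"net/netip"
)

// claimFirstFree scans the block and claims the first address not already recorded.
func claimFirstFree(block netip.Prefix, claimed map[netip.Addr]string, handle string) (netip.Addr, error) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() { // skip the network address itself
		if _, used := claimed[a]; !used {
			claimed[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("no free addresses left in %s", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the node's affine block from the log
	claimed := map[netip.Addr]string{}
	ip, err := claimFirstFree(block, claimed, "example-handle")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("claimed:", ip) // prints 192.168.88.129, matching the log
}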
Nov 8 00:31:44.395176 containerd[1649]: 2025-11-08 00:31:44.352 [INFO][4314] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" HandleID="k8s-pod-network.9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Workload="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.401315 containerd[1649]: 2025-11-08 00:31:44.355 [INFO][4293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0", GenerateName:"whisker-5b7c8bd886-", Namespace:"calico-system", SelfLink:"", UID:"1e4b7614-5497-46ae-a96f-7f92d3916cde", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b7c8bd886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b7c8bd886-mhdkg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia522a2efb7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:44.401315 containerd[1649]: 2025-11-08 00:31:44.355 [INFO][4293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.401315 containerd[1649]: 2025-11-08 00:31:44.355 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia522a2efb7b ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.401315 containerd[1649]: 2025-11-08 00:31:44.378 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.401315 containerd[1649]: 2025-11-08 00:31:44.381 [INFO][4293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0", GenerateName:"whisker-5b7c8bd886-", Namespace:"calico-system", SelfLink:"", UID:"1e4b7614-5497-46ae-a96f-7f92d3916cde", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b7c8bd886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03", Pod:"whisker-5b7c8bd886-mhdkg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia522a2efb7b", MAC:"26:31:04:d7:65:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:44.401315 containerd[1649]: 2025-11-08 00:31:44.392 [INFO][4293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03" Namespace="calico-system" Pod="whisker-5b7c8bd886-mhdkg" WorkloadEndpoint="localhost-k8s-whisker--5b7c8bd886--mhdkg-eth0" Nov 8 00:31:44.425955 containerd[1649]: time="2025-11-08T00:31:44.424751965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:44.425955 containerd[1649]: time="2025-11-08T00:31:44.424792920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:44.425955 containerd[1649]: time="2025-11-08T00:31:44.424804085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:44.432812 containerd[1649]: time="2025-11-08T00:31:44.431892053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:44.461171 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:44.477930 kernel: bpftool[4394]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:31:44.507412 containerd[1649]: time="2025-11-08T00:31:44.507342514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b7c8bd886-mhdkg,Uid:1e4b7614-5497-46ae-a96f-7f92d3916cde,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fec56f1ce61216d5b8a9b7f0ba17a8f3d9f5329085c36cf88fef6bf0a21aa03\"" Nov 8 00:31:44.516049 containerd[1649]: time="2025-11-08T00:31:44.508315440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:44.741963 systemd-networkd[1287]: vxlan.calico: Link UP Nov 8 00:31:44.741967 systemd-networkd[1287]: vxlan.calico: Gained carrier Nov 8 00:31:44.867642 containerd[1649]: time="2025-11-08T00:31:44.867169285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:44.969614 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:31:44.970617 containerd[1649]: time="2025-11-08T00:31:44.963877864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:44.970617 containerd[1649]: time="2025-11-08T00:31:44.964217706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:44.970687 kubelet[2900]: E1108 00:31:44.969760 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:44.970687 kubelet[2900]: E1108 00:31:44.969805 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:44.969626 systemd-resolved[1542]: Flushed all caches. Nov 8 00:31:44.974957 systemd-journald[1198]: Under memory pressure, flushing caches. 
Nov 8 00:31:44.989428 kubelet[2900]: E1108 00:31:44.989378 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a618f084f8064cdab9db195677f26467,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:44.991489 containerd[1649]: time="2025-11-08T00:31:44.991330255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:45.366065 containerd[1649]: time="2025-11-08T00:31:45.366011502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:45.367025 containerd[1649]: time="2025-11-08T00:31:45.366987762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:45.367110 containerd[1649]: time="2025-11-08T00:31:45.367055554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:45.367910 kubelet[2900]: E1108 00:31:45.367258 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:45.367910 kubelet[2900]: E1108 00:31:45.367300 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:45.368036 kubelet[2900]: E1108 00:31:45.367397 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:45.370043 kubelet[2900]: E1108 00:31:45.369966 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:31:45.478639 kubelet[2900]: I1108 00:31:45.478504 2900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce042206-9988-482a-bbf2-d9505d456f72" 
path="/var/lib/kubelet/pods/ce042206-9988-482a-bbf2-d9505d456f72/volumes" Nov 8 00:31:45.805462 kubelet[2900]: E1108 00:31:45.805271 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:31:45.865097 systemd-networkd[1287]: calia522a2efb7b: Gained IPv6LL Nov 8 00:31:46.569075 systemd-networkd[1287]: vxlan.calico: Gained IPv6LL Nov 8 00:31:47.477938 containerd[1649]: time="2025-11-08T00:31:47.477893036Z" level=info msg="StopPodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\"" Nov 8 00:31:47.478875 containerd[1649]: time="2025-11-08T00:31:47.478269753Z" level=info msg="StopPodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\"" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.532 [INFO][4509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.533 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" iface="eth0" netns="/var/run/netns/cni-15cde766-3450-7122-69b1-1d8c17226f25" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.533 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" iface="eth0" netns="/var/run/netns/cni-15cde766-3450-7122-69b1-1d8c17226f25" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.533 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" iface="eth0" netns="/var/run/netns/cni-15cde766-3450-7122-69b1-1d8c17226f25" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.533 [INFO][4509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.533 [INFO][4509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.559 [INFO][4522] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.559 [INFO][4522] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.559 [INFO][4522] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.564 [WARNING][4522] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.564 [INFO][4522] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.565 [INFO][4522] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:47.569552 containerd[1649]: 2025-11-08 00:31:47.567 [INFO][4509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:31:47.573157 systemd[1]: run-netns-cni\x2d15cde766\x2d3450\x2d7122\x2d69b1\x2d1d8c17226f25.mount: Deactivated successfully. Nov 8 00:31:47.574100 containerd[1649]: time="2025-11-08T00:31:47.573029048Z" level=info msg="TearDown network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" successfully" Nov 8 00:31:47.574100 containerd[1649]: time="2025-11-08T00:31:47.573280283Z" level=info msg="StopPodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" returns successfully" Nov 8 00:31:47.575512 containerd[1649]: time="2025-11-08T00:31:47.575199569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-czg8s,Uid:536546db-8e23-43bc-ada9-ff6aca8accce,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.545 [INFO][4510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.545 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" iface="eth0" netns="/var/run/netns/cni-0eac7fd6-e8d8-f8fd-9075-c681fb2d5d56" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.546 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" iface="eth0" netns="/var/run/netns/cni-0eac7fd6-e8d8-f8fd-9075-c681fb2d5d56" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.547 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" iface="eth0" netns="/var/run/netns/cni-0eac7fd6-e8d8-f8fd-9075-c681fb2d5d56" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.547 [INFO][4510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.547 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.590 [INFO][4527] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.591 [INFO][4527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.591 [INFO][4527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.596 [WARNING][4527] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.596 [INFO][4527] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.597 [INFO][4527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:47.600785 containerd[1649]: 2025-11-08 00:31:47.598 [INFO][4510] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:31:47.606438 containerd[1649]: time="2025-11-08T00:31:47.604308091Z" level=info msg="TearDown network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" successfully" Nov 8 00:31:47.606438 containerd[1649]: time="2025-11-08T00:31:47.604334108Z" level=info msg="StopPodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" returns successfully" Nov 8 00:31:47.606438 containerd[1649]: time="2025-11-08T00:31:47.605661554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5p9cw,Uid:8b2ed5b8-86fb-4b7a-9b26-26f59088b35b,Namespace:kube-system,Attempt:1,}" Nov 8 00:31:47.603904 systemd[1]: run-netns-cni\x2d0eac7fd6\x2de8d8\x2df8fd\x2d9075\x2dc681fb2d5d56.mount: Deactivated successfully. Nov 8 00:31:47.708168 systemd-networkd[1287]: calif15c353ce9f: Link UP Nov 8 00:31:47.712568 systemd-networkd[1287]: calif15c353ce9f: Gained carrier Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.642 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0 calico-apiserver-84758c967d- calico-apiserver 536546db-8e23-43bc-ada9-ff6aca8accce 900 0 2025-11-08 00:31:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84758c967d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84758c967d-czg8s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif15c353ce9f [] [] }} ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.642 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.665 [INFO][4557] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" HandleID="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.665 [INFO][4557] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" HandleID="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84758c967d-czg8s", "timestamp":"2025-11-08 00:31:47.665589452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.665 [INFO][4557] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.665 [INFO][4557] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.665 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.673 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.677 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.682 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.685 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.687 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.687 [INFO][4557] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.688 [INFO][4557] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6 Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.691 [INFO][4557] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.697 [INFO][4557] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.697 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" host="localhost" Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.698 [INFO][4557] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:47.731554 containerd[1649]: 2025-11-08 00:31:47.698 [INFO][4557] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" HandleID="k8s-pod-network.4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.733456 containerd[1649]: 2025-11-08 00:31:47.700 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"536546db-8e23-43bc-ada9-ff6aca8accce", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84758c967d-czg8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif15c353ce9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:47.733456 containerd[1649]: 2025-11-08 00:31:47.700 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.733456 containerd[1649]: 2025-11-08 00:31:47.700 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif15c353ce9f ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.733456 containerd[1649]: 2025-11-08 00:31:47.705 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.733456 containerd[1649]: 2025-11-08 00:31:47.706 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"536546db-8e23-43bc-ada9-ff6aca8accce", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6", Pod:"calico-apiserver-84758c967d-czg8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif15c353ce9f", MAC:"5e:41:fe:51:61:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:47.733456 containerd[1649]: 2025-11-08 00:31:47.715 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-czg8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:31:47.759389 containerd[1649]: time="2025-11-08T00:31:47.759243580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:47.759389 containerd[1649]: time="2025-11-08T00:31:47.759314683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:47.759389 containerd[1649]: time="2025-11-08T00:31:47.759333711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:47.759772 containerd[1649]: time="2025-11-08T00:31:47.759726331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:47.794058 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:47.805179 systemd-networkd[1287]: cali314e203a221: Link UP Nov 8 00:31:47.811238 systemd-networkd[1287]: cali314e203a221: Gained carrier Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.688 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0 coredns-668d6bf9bc- kube-system 8b2ed5b8-86fb-4b7a-9b26-26f59088b35b 901 0 2025-11-08 00:31:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5p9cw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali314e203a221 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.688 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.739 [INFO][4566] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" HandleID="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.739 [INFO][4566] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" HandleID="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5p9cw", "timestamp":"2025-11-08 00:31:47.739212419 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.739 [INFO][4566] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.739 [INFO][4566] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.739 [INFO][4566] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.774 [INFO][4566] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.782 [INFO][4566] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.787 [INFO][4566] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.789 [INFO][4566] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.791 [INFO][4566] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.791 [INFO][4566] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.793 [INFO][4566] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9 Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.796 [INFO][4566] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.800 [INFO][4566] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.800 [INFO][4566] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" host="localhost" Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.800 [INFO][4566] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:47.827337 containerd[1649]: 2025-11-08 00:31:47.800 [INFO][4566] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" HandleID="k8s-pod-network.8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.828346 containerd[1649]: 2025-11-08 00:31:47.802 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5p9cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali314e203a221", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:47.828346 containerd[1649]: 2025-11-08 00:31:47.802 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.828346 containerd[1649]: 2025-11-08 00:31:47.802 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali314e203a221 ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.828346 containerd[1649]: 2025-11-08 00:31:47.811 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.828346 
containerd[1649]: 2025-11-08 00:31:47.815 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9", Pod:"coredns-668d6bf9bc-5p9cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali314e203a221", MAC:"22:ef:ee:0b:ed:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:47.828346 containerd[1649]: 2025-11-08 00:31:47.823 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9" Namespace="kube-system" Pod="coredns-668d6bf9bc-5p9cw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:31:47.843984 containerd[1649]: time="2025-11-08T00:31:47.843552270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:47.843984 containerd[1649]: time="2025-11-08T00:31:47.843602752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:47.843984 containerd[1649]: time="2025-11-08T00:31:47.843613773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:47.843984 containerd[1649]: time="2025-11-08T00:31:47.843684532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:47.866344 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:47.872407 containerd[1649]: time="2025-11-08T00:31:47.872338740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-czg8s,Uid:536546db-8e23-43bc-ada9-ff6aca8accce,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6\"" Nov 8 00:31:47.874563 containerd[1649]: time="2025-11-08T00:31:47.874443021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:47.896040 containerd[1649]: time="2025-11-08T00:31:47.896004220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5p9cw,Uid:8b2ed5b8-86fb-4b7a-9b26-26f59088b35b,Namespace:kube-system,Attempt:1,} returns sandbox id \"8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9\"" Nov 8 00:31:47.908291 containerd[1649]: time="2025-11-08T00:31:47.908132571Z" level=info msg="CreateContainer within sandbox \"8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:31:47.933865 containerd[1649]: time="2025-11-08T00:31:47.933827112Z" level=info msg="CreateContainer within sandbox \"8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"942b0f975582e1ef3015e62b96f8c882e067fc3520d732a65aea1e25c11b42af\"" Nov 8 00:31:47.934685 containerd[1649]: time="2025-11-08T00:31:47.934601766Z" level=info msg="StartContainer for \"942b0f975582e1ef3015e62b96f8c882e067fc3520d732a65aea1e25c11b42af\"" Nov 8 00:31:47.980557 containerd[1649]: time="2025-11-08T00:31:47.980529588Z" level=info msg="StartContainer for \"942b0f975582e1ef3015e62b96f8c882e067fc3520d732a65aea1e25c11b42af\" returns successfully" Nov 8 00:31:48.211052 containerd[1649]: time="2025-11-08T00:31:48.210870362Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:48.212871 containerd[1649]: time="2025-11-08T00:31:48.212814986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:48.212945 containerd[1649]: time="2025-11-08T00:31:48.212859277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:48.213097 kubelet[2900]: E1108 00:31:48.213063 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:48.213998 kubelet[2900]: E1108 00:31:48.213103 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" 
Nov 8 00:31:48.213998 kubelet[2900]: E1108 00:31:48.213204 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8w9hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84758c967d-czg8s_calico-apiserver(536546db-8e23-43bc-ada9-ff6aca8accce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:48.214539 kubelet[2900]: E1108 00:31:48.214511 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:31:48.477427 containerd[1649]: time="2025-11-08T00:31:48.477354594Z" level=info msg="StopPodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\"" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.508 [INFO][4720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.508 [INFO][4720] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" iface="eth0" netns="/var/run/netns/cni-c47806d8-5caa-56be-531f-797caee1264e" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.508 [INFO][4720] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" iface="eth0" netns="/var/run/netns/cni-c47806d8-5caa-56be-531f-797caee1264e" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.508 [INFO][4720] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" iface="eth0" netns="/var/run/netns/cni-c47806d8-5caa-56be-531f-797caee1264e" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.508 [INFO][4720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.510 [INFO][4720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.525 [INFO][4727] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.525 [INFO][4727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.525 [INFO][4727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.530 [WARNING][4727] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.530 [INFO][4727] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.532 [INFO][4727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:48.535366 containerd[1649]: 2025-11-08 00:31:48.534 [INFO][4720] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:31:48.535366 containerd[1649]: time="2025-11-08T00:31:48.535313366Z" level=info msg="TearDown network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" successfully" Nov 8 00:31:48.535366 containerd[1649]: time="2025-11-08T00:31:48.535330406Z" level=info msg="StopPodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" returns successfully" Nov 8 00:31:48.536641 containerd[1649]: time="2025-11-08T00:31:48.536580210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655bcd5b7f-mvm84,Uid:d8943d47-ae19-484d-8d89-dda3dcc29a60,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:48.577457 systemd[1]: run-netns-cni\x2dc47806d8\x2d5caa\x2d56be\x2d531f\x2d797caee1264e.mount: Deactivated successfully. Nov 8 00:31:48.608693 systemd-networkd[1287]: cali25bb6117a09: Link UP Nov 8 00:31:48.609116 systemd-networkd[1287]: cali25bb6117a09: Gained carrier Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.561 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0 calico-kube-controllers-655bcd5b7f- calico-system d8943d47-ae19-484d-8d89-dda3dcc29a60 915 0 2025-11-08 00:31:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:655bcd5b7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-655bcd5b7f-mvm84 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali25bb6117a09 [] [] }} ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.561 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.586 [INFO][4745] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" HandleID="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.586 [INFO][4745] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" HandleID="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-655bcd5b7f-mvm84", "timestamp":"2025-11-08 00:31:48.586001996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.586 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.586 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.586 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.590 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.592 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.595 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.596 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.597 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.597 [INFO][4745] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.598 [INFO][4745] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.600 [INFO][4745] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.603 [INFO][4745] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.603 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" host="localhost" Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.603 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:48.620155 containerd[1649]: 2025-11-08 00:31:48.603 [INFO][4745] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" HandleID="k8s-pod-network.94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.622235 containerd[1649]: 2025-11-08 00:31:48.605 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0", GenerateName:"calico-kube-controllers-655bcd5b7f-", Namespace:"calico-system", SelfLink:"", UID:"d8943d47-ae19-484d-8d89-dda3dcc29a60", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655bcd5b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-655bcd5b7f-mvm84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25bb6117a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:48.622235 containerd[1649]: 2025-11-08 00:31:48.605 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.622235 containerd[1649]: 2025-11-08 00:31:48.605 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25bb6117a09 ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.622235 containerd[1649]: 2025-11-08 00:31:48.609 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.622235 containerd[1649]: 2025-11-08 00:31:48.609 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0", GenerateName:"calico-kube-controllers-655bcd5b7f-", Namespace:"calico-system", SelfLink:"", UID:"d8943d47-ae19-484d-8d89-dda3dcc29a60", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655bcd5b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e", Pod:"calico-kube-controllers-655bcd5b7f-mvm84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25bb6117a09", MAC:"86:4d:cd:59:98:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:48.622235 containerd[1649]: 2025-11-08 00:31:48.617 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e" Namespace="calico-system" Pod="calico-kube-controllers-655bcd5b7f-mvm84" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:31:48.637837 containerd[1649]: time="2025-11-08T00:31:48.637776220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:48.637999 containerd[1649]: time="2025-11-08T00:31:48.637977415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:48.638319 containerd[1649]: time="2025-11-08T00:31:48.638298046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:48.638521 containerd[1649]: time="2025-11-08T00:31:48.638506435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:48.664092 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:48.686265 containerd[1649]: time="2025-11-08T00:31:48.686244059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-655bcd5b7f-mvm84,Uid:d8943d47-ae19-484d-8d89-dda3dcc29a60,Namespace:calico-system,Attempt:1,} returns sandbox id \"94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e\"" Nov 8 00:31:48.687234 containerd[1649]: time="2025-11-08T00:31:48.687164742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:48.814370 kubelet[2900]: E1108 00:31:48.814317 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:31:48.855444 kubelet[2900]: I1108 00:31:48.854702 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5p9cw" podStartSLOduration=36.854690177 podStartE2EDuration="36.854690177s" podCreationTimestamp="2025-11-08 00:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:48.85449994 +0000 UTC m=+43.467954822" watchObservedRunningTime="2025-11-08 00:31:48.854690177 +0000 UTC m=+43.468145053" Nov 8 00:31:48.873108 systemd-networkd[1287]: cali314e203a221: Gained IPv6LL Nov 8 00:31:49.071873 containerd[1649]: time="2025-11-08T00:31:49.071677682Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:49.072020 containerd[1649]: time="2025-11-08T00:31:49.071994942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:49.072063 containerd[1649]: time="2025-11-08T00:31:49.072045985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:49.072194 kubelet[2900]: E1108 00:31:49.072161 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:49.072271 kubelet[2900]: E1108 00:31:49.072202 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:49.072362 kubelet[2900]: E1108 00:31:49.072285 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-655bcd5b7f-mvm84_calico-system(d8943d47-ae19-484d-8d89-dda3dcc29a60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:49.073452 kubelet[2900]: E1108 00:31:49.073432 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:31:49.385050 systemd-networkd[1287]: calif15c353ce9f: Gained IPv6LL Nov 8 00:31:49.837011 kubelet[2900]: E1108 00:31:49.836969 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:31:49.837807 kubelet[2900]: E1108 00:31:49.836969 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:31:50.476593 containerd[1649]: time="2025-11-08T00:31:50.476551371Z" level=info msg="StopPodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\"" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.502 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.502 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" iface="eth0" netns="/var/run/netns/cni-906d49f4-90a7-40b6-6989-e4df0fb1a006" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.502 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" iface="eth0" netns="/var/run/netns/cni-906d49f4-90a7-40b6-6989-e4df0fb1a006" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.502 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" iface="eth0" netns="/var/run/netns/cni-906d49f4-90a7-40b6-6989-e4df0fb1a006" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.503 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.503 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.516 [INFO][4828] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.516 [INFO][4828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.516 [INFO][4828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.520 [WARNING][4828] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.520 [INFO][4828] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.521 [INFO][4828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:50.523220 containerd[1649]: 2025-11-08 00:31:50.522 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:31:50.524722 containerd[1649]: time="2025-11-08T00:31:50.523341725Z" level=info msg="TearDown network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" successfully" Nov 8 00:31:50.524722 containerd[1649]: time="2025-11-08T00:31:50.523359586Z" level=info msg="StopPodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" returns successfully" Nov 8 00:31:50.525340 containerd[1649]: time="2025-11-08T00:31:50.525326394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wq4zw,Uid:939bcda9-0a19-4e96-ac5d-405850005d65,Namespace:kube-system,Attempt:1,}" Nov 8 00:31:50.526228 systemd[1]: run-netns-cni\x2d906d49f4\x2d90a7\x2d40b6\x2d6989\x2de4df0fb1a006.mount: Deactivated successfully. 
Nov 8 00:31:50.537202 systemd-networkd[1287]: cali25bb6117a09: Gained IPv6LL Nov 8 00:31:50.594420 systemd-networkd[1287]: cali26c92eb0f4a: Link UP Nov 8 00:31:50.595063 systemd-networkd[1287]: cali26c92eb0f4a: Gained carrier Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.554 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0 coredns-668d6bf9bc- kube-system 939bcda9-0a19-4e96-ac5d-405850005d65 950 0 2025-11-08 00:31:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wq4zw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26c92eb0f4a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.555 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.571 [INFO][4846] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" HandleID="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.571 [INFO][4846] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" HandleID="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wq4zw", "timestamp":"2025-11-08 00:31:50.571617675 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.571 [INFO][4846] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.571 [INFO][4846] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.571 [INFO][4846] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.575 [INFO][4846] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.578 [INFO][4846] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.581 [INFO][4846] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.582 [INFO][4846] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.583 [INFO][4846] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.583 [INFO][4846] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.584 [INFO][4846] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.586 [INFO][4846] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.589 [INFO][4846] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.589 [INFO][4846] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" host="localhost" Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.589 [INFO][4846] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:50.606123 containerd[1649]: 2025-11-08 00:31:50.589 [INFO][4846] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" HandleID="k8s-pod-network.c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.606525 containerd[1649]: 2025-11-08 00:31:50.591 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"939bcda9-0a19-4e96-ac5d-405850005d65", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wq4zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26c92eb0f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:50.606525 containerd[1649]: 2025-11-08 00:31:50.591 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.606525 containerd[1649]: 2025-11-08 00:31:50.591 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26c92eb0f4a ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.606525 containerd[1649]: 2025-11-08 00:31:50.595 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.606525 
containerd[1649]: 2025-11-08 00:31:50.595 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"939bcda9-0a19-4e96-ac5d-405850005d65", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd", Pod:"coredns-668d6bf9bc-wq4zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26c92eb0f4a", MAC:"e2:76:c2:44:6d:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:50.606525 containerd[1649]: 2025-11-08 00:31:50.604 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-wq4zw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:31:50.625794 containerd[1649]: time="2025-11-08T00:31:50.625522936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:50.626435 containerd[1649]: time="2025-11-08T00:31:50.626174469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:50.627174 containerd[1649]: time="2025-11-08T00:31:50.626846386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:50.627244 containerd[1649]: time="2025-11-08T00:31:50.627186068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:50.643273 systemd[1]: run-containerd-runc-k8s.io-c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd-runc.nSaxN3.mount: Deactivated successfully. 
Nov 8 00:31:50.653788 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:50.679547 containerd[1649]: time="2025-11-08T00:31:50.679515840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wq4zw,Uid:939bcda9-0a19-4e96-ac5d-405850005d65,Namespace:kube-system,Attempt:1,} returns sandbox id \"c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd\"" Nov 8 00:31:50.681910 containerd[1649]: time="2025-11-08T00:31:50.681884438Z" level=info msg="CreateContainer within sandbox \"c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:31:50.715561 containerd[1649]: time="2025-11-08T00:31:50.715533297Z" level=info msg="CreateContainer within sandbox \"c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc483695aaf3eabd9c86dad3e58399b60a41bdae7a4d53207e6f84bf48059dee\"" Nov 8 00:31:50.716233 containerd[1649]: time="2025-11-08T00:31:50.716036591Z" level=info msg="StartContainer for \"dc483695aaf3eabd9c86dad3e58399b60a41bdae7a4d53207e6f84bf48059dee\"" Nov 8 00:31:50.751403 containerd[1649]: time="2025-11-08T00:31:50.750542195Z" level=info msg="StartContainer for \"dc483695aaf3eabd9c86dad3e58399b60a41bdae7a4d53207e6f84bf48059dee\" returns successfully" Nov 8 00:31:50.845334 kubelet[2900]: I1108 00:31:50.845287 2900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wq4zw" podStartSLOduration=38.845273986 podStartE2EDuration="38.845273986s" podCreationTimestamp="2025-11-08 00:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:50.845052335 +0000 UTC m=+45.458507216" watchObservedRunningTime="2025-11-08 00:31:50.845273986 +0000 UTC m=+45.458728869" Nov 8 00:31:51.478002 containerd[1649]: time="2025-11-08T00:31:51.477754594Z" level=info msg="StopPodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\"" Nov 8 00:31:51.478328 containerd[1649]: time="2025-11-08T00:31:51.478221042Z" level=info msg="StopPodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\"" Nov 8 00:31:51.480180 containerd[1649]: time="2025-11-08T00:31:51.479966962Z" level=info msg="StopPodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\"" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.533 [INFO][4972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.534 [INFO][4972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" iface="eth0" netns="/var/run/netns/cni-010d5c36-c9aa-df01-2788-136365eaca96" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.534 [INFO][4972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" iface="eth0" netns="/var/run/netns/cni-010d5c36-c9aa-df01-2788-136365eaca96" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.534 [INFO][4972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" iface="eth0" netns="/var/run/netns/cni-010d5c36-c9aa-df01-2788-136365eaca96" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.534 [INFO][4972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.534 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.554 [INFO][4997] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.554 [INFO][4997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.554 [INFO][4997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.561 [WARNING][4997] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.561 [INFO][4997] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.562 [INFO][4997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:51.568219 containerd[1649]: 2025-11-08 00:31:51.565 [INFO][4972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:31:51.570983 containerd[1649]: time="2025-11-08T00:31:51.568602721Z" level=info msg="TearDown network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" successfully" Nov 8 00:31:51.570983 containerd[1649]: time="2025-11-08T00:31:51.568622485Z" level=info msg="StopPodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" returns successfully" Nov 8 00:31:51.570983 containerd[1649]: time="2025-11-08T00:31:51.570037687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cblkz,Uid:c0dfff3f-1568-463e-aed1-906fd9d64aa0,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:51.571307 systemd[1]: run-netns-cni\x2d010d5c36\x2dc9aa\x2ddf01\x2d2788\x2d136365eaca96.mount: Deactivated successfully. Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.520 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.522 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" iface="eth0" netns="/var/run/netns/cni-4e92edfc-f6f0-a783-b558-20f7a55ff35e" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.523 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" iface="eth0" netns="/var/run/netns/cni-4e92edfc-f6f0-a783-b558-20f7a55ff35e" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.526 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" iface="eth0" netns="/var/run/netns/cni-4e92edfc-f6f0-a783-b558-20f7a55ff35e" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.526 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.526 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.559 [INFO][4991] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.559 [INFO][4991] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.562 [INFO][4991] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.566 [WARNING][4991] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.566 [INFO][4991] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.567 [INFO][4991] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:51.573878 containerd[1649]: 2025-11-08 00:31:51.571 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:31:51.573878 containerd[1649]: time="2025-11-08T00:31:51.573828343Z" level=info msg="TearDown network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" successfully" Nov 8 00:31:51.573878 containerd[1649]: time="2025-11-08T00:31:51.573841950Z" level=info msg="StopPodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" returns successfully" Nov 8 00:31:51.576569 containerd[1649]: time="2025-11-08T00:31:51.575387823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-hp26p,Uid:558dc8c2-70d1-4eda-a967-93f57dec2dc2,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:31:51.575568 systemd[1]: run-netns-cni\x2d4e92edfc\x2df6f0\x2da783\x2db558\x2d20f7a55ff35e.mount: Deactivated successfully. Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.537 [INFO][4973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.537 [INFO][4973] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" iface="eth0" netns="/var/run/netns/cni-afad84dc-0d80-b533-b8e2-7b96bd1a9595" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.537 [INFO][4973] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" iface="eth0" netns="/var/run/netns/cni-afad84dc-0d80-b533-b8e2-7b96bd1a9595" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.537 [INFO][4973] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" iface="eth0" netns="/var/run/netns/cni-afad84dc-0d80-b533-b8e2-7b96bd1a9595" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.537 [INFO][4973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.537 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.576 [INFO][5002] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.576 [INFO][5002] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.576 [INFO][5002] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.580 [WARNING][5002] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.580 [INFO][5002] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.582 [INFO][5002] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:51.589686 containerd[1649]: 2025-11-08 00:31:51.585 [INFO][4973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:31:51.589686 containerd[1649]: time="2025-11-08T00:31:51.589442751Z" level=info msg="TearDown network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" successfully" Nov 8 00:31:51.589686 containerd[1649]: time="2025-11-08T00:31:51.589458392Z" level=info msg="StopPodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" returns successfully" Nov 8 00:31:51.592186 containerd[1649]: time="2025-11-08T00:31:51.589975866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4wsd,Uid:c5b205c6-f534-4f27-bd2e-0a8fe1443335,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:51.673064 systemd-networkd[1287]: cali4a9a388a2c2: Link UP Nov 8 00:31:51.673590 systemd-networkd[1287]: cali4a9a388a2c2: Gained carrier Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.615 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--cblkz-eth0 goldmane-666569f655- calico-system c0dfff3f-1568-463e-aed1-906fd9d64aa0 973 0 2025-11-08 00:31:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-cblkz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4a9a388a2c2 [] [] }} ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.615 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.648 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" HandleID="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.648 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" 
HandleID="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-cblkz", "timestamp":"2025-11-08 00:31:51.64862746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.648 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.648 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.648 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.653 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.657 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.659 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.660 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.661 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.661 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.662 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.664 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.667 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.667 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" host="localhost" Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.667 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:51.687566 containerd[1649]: 2025-11-08 00:31:51.667 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" HandleID="k8s-pod-network.95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.688352 containerd[1649]: 2025-11-08 00:31:51.669 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cblkz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c0dfff3f-1568-463e-aed1-906fd9d64aa0", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-cblkz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a9a388a2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:51.688352 containerd[1649]: 2025-11-08 00:31:51.669 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.688352 containerd[1649]: 2025-11-08 00:31:51.670 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a9a388a2c2 ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.688352 containerd[1649]: 2025-11-08 00:31:51.676 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.688352 containerd[1649]: 2025-11-08 00:31:51.676 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cblkz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c0dfff3f-1568-463e-aed1-906fd9d64aa0", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc", Pod:"goldmane-666569f655-cblkz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a9a388a2c2", MAC:"d2:40:d6:bd:2a:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:51.688352 containerd[1649]: 2025-11-08 00:31:51.682 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc" Namespace="calico-system" Pod="goldmane-666569f655-cblkz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:31:51.704520 containerd[1649]: time="2025-11-08T00:31:51.704464209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:51.705065 containerd[1649]: time="2025-11-08T00:31:51.704968598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:51.705065 containerd[1649]: time="2025-11-08T00:31:51.704993014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:51.705125 containerd[1649]: time="2025-11-08T00:31:51.705057102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:51.745400 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:51.785563 systemd-networkd[1287]: cali93bea2385e3: Link UP Nov 8 00:31:51.788208 systemd-networkd[1287]: cali93bea2385e3: Gained carrier Nov 8 00:31:51.791866 containerd[1649]: time="2025-11-08T00:31:51.791796134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cblkz,Uid:c0dfff3f-1568-463e-aed1-906fd9d64aa0,Namespace:calico-system,Attempt:1,} returns sandbox id \"95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc\"" Nov 8 00:31:51.794381 containerd[1649]: time="2025-11-08T00:31:51.794169517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.630 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m4wsd-eth0 csi-node-driver- calico-system c5b205c6-f534-4f27-bd2e-0a8fe1443335 972 0 2025-11-08 00:31:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m4wsd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali93bea2385e3 [] [] }} ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.631 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.658 [INFO][5056] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" HandleID="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.658 [INFO][5056] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" HandleID="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m4wsd", "timestamp":"2025-11-08 00:31:51.658850919 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.658 [INFO][5056] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
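With the goldmane sandbox created, containerd starts PullImage "ghcr.io/flatcar/calico/goldmane:v3.30.4", which fails further down with a NotFound from ghcr.io. Below is a minimal sketch for reproducing that pull directly against the node's containerd socket with the Go client; the socket path and the k8s.io namespace are the usual kubelet defaults but are assumptions here, and import paths differ between containerd major versions.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance kubelet uses (containerd[1649] in the log).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Reference taken from the log; ghcr.io answers 404 for it, so Pull should
	// return a "not found" error matching the PullImage failures later in this section.
	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull %s: %v", ref, err)
	}
	log.Printf("pulled %s", img.Name())
}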
Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.667 [INFO][5056] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.667 [INFO][5056] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.754 [INFO][5056] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.760 [INFO][5056] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.762 [INFO][5056] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.763 [INFO][5056] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.765 [INFO][5056] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.765 [INFO][5056] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.766 [INFO][5056] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33 Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.769 [INFO][5056] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.775 [INFO][5056] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.775 [INFO][5056] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" host="localhost" Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.775 [INFO][5056] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
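The interleaving above shows why the host-wide IPAM lock exists: worker [5056] logged "About to acquire host-wide IPAM lock" at 00:31:51.658 but only acquired it at 00:31:51.667, right after [5047] released it, so the two concurrent CNI ADDs were handed different addresses (.134 and .135) from the same /26. The toy allocator below only illustrates that serialization pattern with an in-process mutex; Calico's real IPAM additionally coordinates through its datastore, which this sketch does not model.

package main

import (
	"fmt"
	"sync"
)

// blockAllocator hands out sequential host addresses from one IPAM block,
// serialized by a mutex so concurrent requests never receive the same IP.
type blockAllocator struct {
	mu   sync.Mutex
	base [4]byte // network address of the block, e.g. 192.168.88.128
	next int     // next free offset within the block
	size int     // number of addresses in the block (64 for a /26)
}

func (b *blockAllocator) assign() (string, error) {
	b.mu.Lock() // the "host-wide lock" in this toy model
	defer b.mu.Unlock()
	if b.next >= b.size {
		return "", fmt.Errorf("block exhausted")
	}
	ip := fmt.Sprintf("%d.%d.%d.%d", b.base[0], b.base[1], b.base[2], int(b.base[3])+b.next)
	b.next++
	return ip, nil
}

func main() {
	// Offsets 0-5 (.128-.133) are treated as already in use, as they are in the log.
	alloc := &blockAllocator{base: [4]byte{192, 168, 88, 128}, next: 6, size: 64}

	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane-666569f655-cblkz", "csi-node-driver-m4wsd", "calico-apiserver-84758c967d-hp26p"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			ip, _ := alloc.assign()
			fmt.Printf("%s -> %s\n", pod, ip)
		}(pod)
	}
	wg.Wait()
}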
Nov 8 00:31:51.798335 containerd[1649]: 2025-11-08 00:31:51.775 [INFO][5056] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" HandleID="k8s-pod-network.90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.798956 containerd[1649]: 2025-11-08 00:31:51.776 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4wsd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5b205c6-f534-4f27-bd2e-0a8fe1443335", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m4wsd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93bea2385e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:51.798956 containerd[1649]: 2025-11-08 00:31:51.776 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.798956 containerd[1649]: 2025-11-08 00:31:51.777 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93bea2385e3 ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.798956 containerd[1649]: 2025-11-08 00:31:51.788 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.798956 containerd[1649]: 2025-11-08 00:31:51.788 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4wsd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5b205c6-f534-4f27-bd2e-0a8fe1443335", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33", Pod:"csi-node-driver-m4wsd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93bea2385e3", MAC:"e2:15:59:1a:60:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:51.798956 containerd[1649]: 2025-11-08 00:31:51.796 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33" Namespace="calico-system" Pod="csi-node-driver-m4wsd" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:31:51.817209 containerd[1649]: time="2025-11-08T00:31:51.816894394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:51.817209 containerd[1649]: time="2025-11-08T00:31:51.817023151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:51.817209 containerd[1649]: time="2025-11-08T00:31:51.817035012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:51.817209 containerd[1649]: time="2025-11-08T00:31:51.817172875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:51.835687 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:51.851837 containerd[1649]: time="2025-11-08T00:31:51.851701695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4wsd,Uid:c5b205c6-f534-4f27-bd2e-0a8fe1443335,Namespace:calico-system,Attempt:1,} returns sandbox id \"90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33\"" Nov 8 00:31:51.907008 systemd-networkd[1287]: cali94138dca4c4: Link UP Nov 8 00:31:51.907147 systemd-networkd[1287]: cali94138dca4c4: Gained carrier Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.633 [INFO][5024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0 calico-apiserver-84758c967d- calico-apiserver 558dc8c2-70d1-4eda-a967-93f57dec2dc2 971 0 2025-11-08 00:31:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84758c967d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84758c967d-hp26p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali94138dca4c4 [] [] }} ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.633 [INFO][5024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.657 [INFO][5058] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" HandleID="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.658 [INFO][5058] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" HandleID="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84758c967d-hp26p", "timestamp":"2025-11-08 00:31:51.657935748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.659 [INFO][5058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.777 [INFO][5058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
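Each CNI ADD in this section reports a host-side veth name (cali4a9a388a2c2, cali93bea2385e3, and cali94138dca4c4), which systemd-networkd then shows linking up, gaining carrier, and later an IPv6 link-local address. The following read-only sketch, standard library only, lists those Calico-managed host interfaces on the node.

package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatalf("list interfaces: %v", err)
	}
	for _, iface := range ifaces {
		// Calico names the host end of each workload veth "cali" plus a short hash,
		// e.g. cali93bea2385e3 for csi-node-driver-m4wsd in the records above.
		if !strings.HasPrefix(iface.Name, "cali") {
			continue
		}
		up := iface.Flags&net.FlagUp != 0
		fmt.Printf("%-16s up=%-5t mac=%s\n", iface.Name, up, iface.HardwareAddr)
	}
}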
Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.778 [INFO][5058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.855 [INFO][5058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.875 [INFO][5058] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.879 [INFO][5058] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.881 [INFO][5058] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.883 [INFO][5058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.883 [INFO][5058] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.883 [INFO][5058] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0 Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.889 [INFO][5058] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.900 [INFO][5058] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.900 [INFO][5058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" host="localhost" Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.900 [INFO][5058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
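The rest of this section is dominated by the image pulls for ghcr.io/flatcar/calico/*:v3.30.4 failing: containerd gets a 404 from ghcr.io, kubelet records ErrImagePull, and later sync attempts report ImagePullBackOff, meaning kubelet waits increasingly long before retrying the pull for that container. The sketch below only illustrates that generic retry-with-exponential-backoff pattern; the 10 s initial delay and 5 min cap are assumptions for the example, not values read from this log.

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullImage stands in for a container-image pull; it always fails here, the
// way the ghcr.io/flatcar/calico/*:v3.30.4 pulls do in the surrounding log.
func pullImage(ref string) error {
	return errors.New("not found")
}

func main() {
	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"

	delay := 10 * time.Second        // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap

	for attempt := 1; attempt <= 5; attempt++ {
		if err := pullImage(ref); err != nil {
			fmt.Printf("attempt %d: ErrImagePull (%v); next retry in %s (ImagePullBackOff)\n", attempt, err, delay)
			// A real controller would wait here (time.Sleep(delay)); skipped so the example runs instantly.
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("pulled", ref)
		return
	}
	fmt.Println("still failing; the pod stays in ImagePullBackOff until the image becomes pullable")
}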
Nov 8 00:31:51.926032 containerd[1649]: 2025-11-08 00:31:51.900 [INFO][5058] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" HandleID="k8s-pod-network.57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.931058 containerd[1649]: 2025-11-08 00:31:51.904 [INFO][5024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"558dc8c2-70d1-4eda-a967-93f57dec2dc2", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84758c967d-hp26p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94138dca4c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:51.931058 containerd[1649]: 2025-11-08 00:31:51.904 [INFO][5024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.931058 containerd[1649]: 2025-11-08 00:31:51.904 [INFO][5024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94138dca4c4 ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.931058 containerd[1649]: 2025-11-08 00:31:51.906 [INFO][5024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.931058 containerd[1649]: 2025-11-08 00:31:51.907 [INFO][5024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"558dc8c2-70d1-4eda-a967-93f57dec2dc2", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0", Pod:"calico-apiserver-84758c967d-hp26p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94138dca4c4", MAC:"ee:3c:1c:eb:f5:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:51.931058 containerd[1649]: 2025-11-08 00:31:51.923 [INFO][5024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0" Namespace="calico-apiserver" Pod="calico-apiserver-84758c967d-hp26p" WorkloadEndpoint="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:31:51.944708 containerd[1649]: time="2025-11-08T00:31:51.944461340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:51.944708 containerd[1649]: time="2025-11-08T00:31:51.944606401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:51.944708 containerd[1649]: time="2025-11-08T00:31:51.944633952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:51.944903 containerd[1649]: time="2025-11-08T00:31:51.944865950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:51.945336 systemd-networkd[1287]: cali26c92eb0f4a: Gained IPv6LL Nov 8 00:31:51.964958 systemd-resolved[1542]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:31:51.988169 containerd[1649]: time="2025-11-08T00:31:51.988146622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84758c967d-hp26p,Uid:558dc8c2-70d1-4eda-a967-93f57dec2dc2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0\"" Nov 8 00:31:52.182097 containerd[1649]: time="2025-11-08T00:31:52.182056685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:52.186689 containerd[1649]: time="2025-11-08T00:31:52.186666371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:52.186785 containerd[1649]: time="2025-11-08T00:31:52.186718130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:52.187134 kubelet[2900]: E1108 00:31:52.186892 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:52.187134 kubelet[2900]: E1108 00:31:52.186939 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:52.187134 kubelet[2900]: E1108 00:31:52.187097 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtt6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cblkz_calico-system(c0dfff3f-1568-463e-aed1-906fd9d64aa0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:52.188626 kubelet[2900]: E1108 00:31:52.188255 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:31:52.188667 containerd[1649]: 
time="2025-11-08T00:31:52.188322932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:52.520570 containerd[1649]: time="2025-11-08T00:31:52.520481601Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:52.526543 containerd[1649]: time="2025-11-08T00:31:52.526466346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:52.526543 containerd[1649]: time="2025-11-08T00:31:52.526516964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:52.528116 kubelet[2900]: E1108 00:31:52.527993 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:52.528116 kubelet[2900]: E1108 00:31:52.528029 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:52.528655 kubelet[2900]: E1108 00:31:52.528186 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:52.528734 containerd[1649]: time="2025-11-08T00:31:52.528292412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:52.530476 systemd[1]: run-netns-cni\x2dafad84dc\x2d0d80\x2db533\x2db8e2\x2d7b96bd1a9595.mount: Deactivated successfully. Nov 8 00:31:52.861118 kubelet[2900]: E1108 00:31:52.860932 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:31:52.925017 containerd[1649]: time="2025-11-08T00:31:52.924985902Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:52.929879 containerd[1649]: time="2025-11-08T00:31:52.929854491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:52.930009 containerd[1649]: time="2025-11-08T00:31:52.929903724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:52.930080 kubelet[2900]: E1108 00:31:52.930049 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:52.930114 kubelet[2900]: E1108 00:31:52.930092 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:52.930269 kubelet[2900]: E1108 00:31:52.930241 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9gj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84758c967d-hp26p_calico-apiserver(558dc8c2-70d1-4eda-a967-93f57dec2dc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:52.930714 containerd[1649]: time="2025-11-08T00:31:52.930698704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:52.931817 kubelet[2900]: E1108 00:31:52.931788 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:31:52.969033 systemd-networkd[1287]: cali4a9a388a2c2: Gained IPv6LL Nov 8 00:31:53.276067 containerd[1649]: time="2025-11-08T00:31:53.275897509Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:53.276282 containerd[1649]: time="2025-11-08T00:31:53.276253956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:53.276356 containerd[1649]: time="2025-11-08T00:31:53.276308564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:53.277084 kubelet[2900]: E1108 00:31:53.276668 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:53.277084 kubelet[2900]: E1108 00:31:53.276702 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:53.277084 kubelet[2900]: E1108 00:31:53.276795 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:53.278099 kubelet[2900]: E1108 00:31:53.277989 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:53.417067 systemd-networkd[1287]: cali93bea2385e3: Gained IPv6LL Nov 8 00:31:53.673092 systemd-networkd[1287]: cali94138dca4c4: Gained IPv6LL Nov 8 00:31:53.861477 kubelet[2900]: E1108 00:31:53.861399 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:31:53.862137 kubelet[2900]: E1108 00:31:53.862081 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:31:58.478157 containerd[1649]: time="2025-11-08T00:31:58.478006611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:58.843199 containerd[1649]: time="2025-11-08T00:31:58.843155802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:58.843539 containerd[1649]: time="2025-11-08T00:31:58.843510024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:58.843629 
containerd[1649]: time="2025-11-08T00:31:58.843567910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:58.843678 kubelet[2900]: E1108 00:31:58.843650 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:58.843964 kubelet[2900]: E1108 00:31:58.843685 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:58.843964 kubelet[2900]: E1108 00:31:58.843767 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a618f084f8064cdab9db195677f26467,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:58.846894 containerd[1649]: time="2025-11-08T00:31:58.846870728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:59.239844 containerd[1649]: time="2025-11-08T00:31:59.239747487Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:59.240497 containerd[1649]: time="2025-11-08T00:31:59.240421398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:59.240497 containerd[1649]: time="2025-11-08T00:31:59.240461136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:59.241675 kubelet[2900]: E1108 00:31:59.240572 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:59.241675 kubelet[2900]: E1108 00:31:59.240607 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:59.241675 kubelet[2900]: E1108 00:31:59.240755 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:59.242252 kubelet[2900]: E1108 00:31:59.242216 2900 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:32:01.478075 containerd[1649]: time="2025-11-08T00:32:01.477995140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:01.841356 containerd[1649]: time="2025-11-08T00:32:01.841314354Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:01.841683 containerd[1649]: time="2025-11-08T00:32:01.841652512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:01.841732 containerd[1649]: time="2025-11-08T00:32:01.841716122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:01.842020 kubelet[2900]: E1108 00:32:01.841811 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:01.842020 kubelet[2900]: E1108 00:32:01.841851 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:01.842020 kubelet[2900]: E1108 00:32:01.841970 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8w9hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84758c967d-czg8s_calico-apiserver(536546db-8e23-43bc-ada9-ff6aca8accce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:01.843808 kubelet[2900]: E1108 00:32:01.843782 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:32:02.479264 containerd[1649]: time="2025-11-08T00:32:02.479228408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:32:02.862858 containerd[1649]: time="2025-11-08T00:32:02.862804711Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:02.863336 containerd[1649]: time="2025-11-08T00:32:02.863276622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:32:02.863336 
containerd[1649]: time="2025-11-08T00:32:02.863302464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:02.863466 kubelet[2900]: E1108 00:32:02.863431 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:02.863826 kubelet[2900]: E1108 00:32:02.863471 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:02.863826 kubelet[2900]: E1108 00:32:02.863566 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-655bcd5b7f-mvm84_calico-system(d8943d47-ae19-484d-8d89-dda3dcc29a60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:02.866136 kubelet[2900]: E1108 00:32:02.866065 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:32:05.768405 containerd[1649]: time="2025-11-08T00:32:05.768112918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:32:05.768905 containerd[1649]: time="2025-11-08T00:32:05.768760319Z" level=info msg="StopPodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\"" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.820 [WARNING][5260] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"558dc8c2-70d1-4eda-a967-93f57dec2dc2", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0", Pod:"calico-apiserver-84758c967d-hp26p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94138dca4c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.820 [INFO][5260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.820 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" iface="eth0" netns="" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.820 [INFO][5260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.820 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.836 [INFO][5267] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.836 [INFO][5267] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.836 [INFO][5267] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.841 [WARNING][5267] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.841 [INFO][5267] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.842 [INFO][5267] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:05.844748 containerd[1649]: 2025-11-08 00:32:05.843 [INFO][5260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:05.854423 containerd[1649]: time="2025-11-08T00:32:05.844771127Z" level=info msg="TearDown network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" successfully" Nov 8 00:32:05.854423 containerd[1649]: time="2025-11-08T00:32:05.844787603Z" level=info msg="StopPodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" returns successfully" Nov 8 00:32:06.056143 containerd[1649]: time="2025-11-08T00:32:06.056092567Z" level=info msg="RemovePodSandbox for \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\"" Nov 8 00:32:06.056143 containerd[1649]: time="2025-11-08T00:32:06.056125296Z" level=info msg="Forcibly stopping sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\"" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.089 [WARNING][5281] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"558dc8c2-70d1-4eda-a967-93f57dec2dc2", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57c4db2578048aa4a5634d3fb876e7ffd6013ec8ea4fe0d0d688d0f191d523a0", Pod:"calico-apiserver-84758c967d-hp26p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali94138dca4c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.089 [INFO][5281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.089 [INFO][5281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" iface="eth0" netns="" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.089 [INFO][5281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.089 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.103 [INFO][5289] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.104 [INFO][5289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.104 [INFO][5289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.109 [WARNING][5289] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.109 [INFO][5289] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" HandleID="k8s-pod-network.7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Workload="localhost-k8s-calico--apiserver--84758c967d--hp26p-eth0" Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.109 [INFO][5289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.112513 containerd[1649]: 2025-11-08 00:32:06.111 [INFO][5281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6" Nov 8 00:32:06.128404 containerd[1649]: time="2025-11-08T00:32:06.112538156Z" level=info msg="TearDown network for sandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" successfully" Nov 8 00:32:06.129231 containerd[1649]: time="2025-11-08T00:32:06.129197955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.129302 containerd[1649]: time="2025-11-08T00:32:06.129281839Z" level=info msg="RemovePodSandbox \"7f129ede87c085322632e017826913d76d407d9023e8f529257ad197229f85d6\" returns successfully" Nov 8 00:32:06.129683 containerd[1649]: time="2025-11-08T00:32:06.129666083Z" level=info msg="StopPodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\"" Nov 8 00:32:06.182345 containerd[1649]: time="2025-11-08T00:32:06.182297241Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:06.190634 containerd[1649]: time="2025-11-08T00:32:06.190599339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:32:06.190737 containerd[1649]: time="2025-11-08T00:32:06.190661043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:06.191776 kubelet[2900]: E1108 00:32:06.190984 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:06.191776 kubelet[2900]: E1108 00:32:06.191019 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:06.191776 kubelet[2900]: E1108 00:32:06.191195 2900 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtt6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cblkz_calico-system(c0dfff3f-1568-463e-aed1-906fd9d64aa0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:06.205638 containerd[1649]: time="2025-11-08T00:32:06.191261651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:06.205683 kubelet[2900]: E1108 00:32:06.192569 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.181 [WARNING][5303] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9", Pod:"coredns-668d6bf9bc-5p9cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali314e203a221", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.181 [INFO][5303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.181 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" iface="eth0" netns="" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.181 [INFO][5303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.181 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.215 [INFO][5310] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.215 [INFO][5310] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.215 [INFO][5310] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.220 [WARNING][5310] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.220 [INFO][5310] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.221 [INFO][5310] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.223327 containerd[1649]: 2025-11-08 00:32:06.222 [INFO][5303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.235136 containerd[1649]: time="2025-11-08T00:32:06.223366154Z" level=info msg="TearDown network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" successfully" Nov 8 00:32:06.235136 containerd[1649]: time="2025-11-08T00:32:06.223386299Z" level=info msg="StopPodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" returns successfully" Nov 8 00:32:06.235136 containerd[1649]: time="2025-11-08T00:32:06.223661642Z" level=info msg="RemovePodSandbox for \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\"" Nov 8 00:32:06.235136 containerd[1649]: time="2025-11-08T00:32:06.223684236Z" level=info msg="Forcibly stopping sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\"" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.247 [WARNING][5324] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b2ed5b8-86fb-4b7a-9b26-26f59088b35b", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8824dde692459c3a5ae668761e9c4fde48dccb2688bff55df5b48f438c8fa3e9", Pod:"coredns-668d6bf9bc-5p9cw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali314e203a221", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.248 [INFO][5324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.248 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" iface="eth0" netns="" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.248 [INFO][5324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.248 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.267 [INFO][5332] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.267 [INFO][5332] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.267 [INFO][5332] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.271 [WARNING][5332] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.271 [INFO][5332] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" HandleID="k8s-pod-network.f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Workload="localhost-k8s-coredns--668d6bf9bc--5p9cw-eth0" Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.272 [INFO][5332] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.274630 containerd[1649]: 2025-11-08 00:32:06.273 [INFO][5324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72" Nov 8 00:32:06.275439 containerd[1649]: time="2025-11-08T00:32:06.274662041Z" level=info msg="TearDown network for sandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" successfully" Nov 8 00:32:06.276498 containerd[1649]: time="2025-11-08T00:32:06.276464163Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.276556 containerd[1649]: time="2025-11-08T00:32:06.276509278Z" level=info msg="RemovePodSandbox \"f8a2276ce0fe12dd5a4d868f1552f2d4fc878719d08d26e02e0cf5f9142eae72\" returns successfully" Nov 8 00:32:06.277049 containerd[1649]: time="2025-11-08T00:32:06.276906839Z" level=info msg="StopPodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\"" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.305 [WARNING][5346] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cblkz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c0dfff3f-1568-463e-aed1-906fd9d64aa0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc", Pod:"goldmane-666569f655-cblkz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a9a388a2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.305 [INFO][5346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.305 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" iface="eth0" netns="" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.305 [INFO][5346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.305 [INFO][5346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.320 [INFO][5353] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.320 [INFO][5353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.320 [INFO][5353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.324 [WARNING][5353] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.325 [INFO][5353] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.326 [INFO][5353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.329068 containerd[1649]: 2025-11-08 00:32:06.327 [INFO][5346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.329068 containerd[1649]: time="2025-11-08T00:32:06.329052429Z" level=info msg="TearDown network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" successfully" Nov 8 00:32:06.337661 containerd[1649]: time="2025-11-08T00:32:06.329073181Z" level=info msg="StopPodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" returns successfully" Nov 8 00:32:06.337661 containerd[1649]: time="2025-11-08T00:32:06.331077987Z" level=info msg="RemovePodSandbox for \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\"" Nov 8 00:32:06.337661 containerd[1649]: time="2025-11-08T00:32:06.331099817Z" level=info msg="Forcibly stopping sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\"" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.356 [WARNING][5367] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--cblkz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c0dfff3f-1568-463e-aed1-906fd9d64aa0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95aa7be66221f7f4575ab0603d5687c89c3e760dd3c7116fc3ac00f19e2b26fc", Pod:"goldmane-666569f655-cblkz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a9a388a2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.356 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.357 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" iface="eth0" netns="" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.357 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.357 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.376 [INFO][5374] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.376 [INFO][5374] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.376 [INFO][5374] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.380 [WARNING][5374] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.380 [INFO][5374] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" HandleID="k8s-pod-network.3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Workload="localhost-k8s-goldmane--666569f655--cblkz-eth0" Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.381 [INFO][5374] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.383802 containerd[1649]: 2025-11-08 00:32:06.382 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec" Nov 8 00:32:06.388150 containerd[1649]: time="2025-11-08T00:32:06.383834592Z" level=info msg="TearDown network for sandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" successfully" Nov 8 00:32:06.390265 containerd[1649]: time="2025-11-08T00:32:06.390232698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.390357 containerd[1649]: time="2025-11-08T00:32:06.390307790Z" level=info msg="RemovePodSandbox \"3265a2745e9ae0466ceafbf33ebb57c974e2beed74c3345f0502814ff46829ec\" returns successfully" Nov 8 00:32:06.390959 containerd[1649]: time="2025-11-08T00:32:06.390684548Z" level=info msg="StopPodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\"" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.420 [WARNING][5388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"939bcda9-0a19-4e96-ac5d-405850005d65", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd", Pod:"coredns-668d6bf9bc-wq4zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26c92eb0f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.420 [INFO][5388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.420 [INFO][5388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" iface="eth0" netns="" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.420 [INFO][5388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.420 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.434 [INFO][5395] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.434 [INFO][5395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.434 [INFO][5395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.439 [WARNING][5395] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.439 [INFO][5395] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.440 [INFO][5395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.442755 containerd[1649]: 2025-11-08 00:32:06.441 [INFO][5388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.449951 containerd[1649]: time="2025-11-08T00:32:06.442798314Z" level=info msg="TearDown network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" successfully" Nov 8 00:32:06.449951 containerd[1649]: time="2025-11-08T00:32:06.442821296Z" level=info msg="StopPodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" returns successfully" Nov 8 00:32:06.449951 containerd[1649]: time="2025-11-08T00:32:06.443224958Z" level=info msg="RemovePodSandbox for \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\"" Nov 8 00:32:06.449951 containerd[1649]: time="2025-11-08T00:32:06.443239501Z" level=info msg="Forcibly stopping sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\"" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.471 [WARNING][5409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"939bcda9-0a19-4e96-ac5d-405850005d65", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c73d78a2c9429af1fe38503688b5341f4387662ff13129bd7061317d2367a3dd", Pod:"coredns-668d6bf9bc-wq4zw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26c92eb0f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.471 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.471 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" iface="eth0" netns="" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.471 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.471 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.489 [INFO][5416] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.489 [INFO][5416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.489 [INFO][5416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.493 [WARNING][5416] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.494 [INFO][5416] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" HandleID="k8s-pod-network.83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Workload="localhost-k8s-coredns--668d6bf9bc--wq4zw-eth0" Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.495 [INFO][5416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.497477 containerd[1649]: 2025-11-08 00:32:06.496 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e" Nov 8 00:32:06.497942 containerd[1649]: time="2025-11-08T00:32:06.497504552Z" level=info msg="TearDown network for sandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" successfully" Nov 8 00:32:06.508810 containerd[1649]: time="2025-11-08T00:32:06.508766688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.508810 containerd[1649]: time="2025-11-08T00:32:06.508812248Z" level=info msg="RemovePodSandbox \"83d1a9e89ea2e606570757e3d36ce51ba3e34485b1ed3bee50364c6355c3528e\" returns successfully" Nov 8 00:32:06.509269 containerd[1649]: time="2025-11-08T00:32:06.509251413Z" level=info msg="StopPodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\"" Nov 8 00:32:06.547146 containerd[1649]: time="2025-11-08T00:32:06.547111398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:06.550328 containerd[1649]: time="2025-11-08T00:32:06.550238233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:06.550328 containerd[1649]: time="2025-11-08T00:32:06.550288147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:06.550462 kubelet[2900]: E1108 00:32:06.550401 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:06.550462 kubelet[2900]: E1108 00:32:06.550450 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:06.551011 kubelet[2900]: E1108 00:32:06.550554 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9gj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84758c967d-hp26p_calico-apiserver(558dc8c2-70d1-4eda-a967-93f57dec2dc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:06.552203 kubelet[2900]: E1108 00:32:06.551910 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.533 [WARNING][5430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0", GenerateName:"calico-kube-controllers-655bcd5b7f-", Namespace:"calico-system", SelfLink:"", UID:"d8943d47-ae19-484d-8d89-dda3dcc29a60", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655bcd5b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e", Pod:"calico-kube-controllers-655bcd5b7f-mvm84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25bb6117a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.534 [INFO][5430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.534 [INFO][5430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" iface="eth0" netns="" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.534 [INFO][5430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.534 [INFO][5430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.548 [INFO][5438] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.548 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.548 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.553 [WARNING][5438] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.553 [INFO][5438] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.561 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.564233 containerd[1649]: 2025-11-08 00:32:06.562 [INFO][5430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.565131 containerd[1649]: time="2025-11-08T00:32:06.564270506Z" level=info msg="TearDown network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" successfully" Nov 8 00:32:06.565131 containerd[1649]: time="2025-11-08T00:32:06.564293431Z" level=info msg="StopPodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" returns successfully" Nov 8 00:32:06.565131 containerd[1649]: time="2025-11-08T00:32:06.564872163Z" level=info msg="RemovePodSandbox for \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\"" Nov 8 00:32:06.565131 containerd[1649]: time="2025-11-08T00:32:06.564890789Z" level=info msg="Forcibly stopping sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\"" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.590 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0", GenerateName:"calico-kube-controllers-655bcd5b7f-", Namespace:"calico-system", SelfLink:"", UID:"d8943d47-ae19-484d-8d89-dda3dcc29a60", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"655bcd5b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94020e00258befab3f47db16741239d79bf3e685f133d30db10a9881e26e7b9e", Pod:"calico-kube-controllers-655bcd5b7f-mvm84", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali25bb6117a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.590 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.590 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" iface="eth0" netns="" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.590 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.590 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.605 [INFO][5459] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.606 [INFO][5459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.606 [INFO][5459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.610 [WARNING][5459] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.610 [INFO][5459] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" HandleID="k8s-pod-network.caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Workload="localhost-k8s-calico--kube--controllers--655bcd5b7f--mvm84-eth0" Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.611 [INFO][5459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.614548 containerd[1649]: 2025-11-08 00:32:06.613 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939" Nov 8 00:32:06.614950 containerd[1649]: time="2025-11-08T00:32:06.614536380Z" level=info msg="TearDown network for sandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" successfully" Nov 8 00:32:06.632159 containerd[1649]: time="2025-11-08T00:32:06.632121022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.632260 containerd[1649]: time="2025-11-08T00:32:06.632185543Z" level=info msg="RemovePodSandbox \"caf42fdb3ada0ed0ba850958182b309f6760189d49c1f7086237e2d2431c9939\" returns successfully" Nov 8 00:32:06.632993 containerd[1649]: time="2025-11-08T00:32:06.632630985Z" level=info msg="StopPodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\"" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.655 [WARNING][5473] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"536546db-8e23-43bc-ada9-ff6aca8accce", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6", Pod:"calico-apiserver-84758c967d-czg8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif15c353ce9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.656 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.656 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" iface="eth0" netns="" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.656 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.656 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.668 [INFO][5480] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.668 [INFO][5480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.668 [INFO][5480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.672 [WARNING][5480] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.672 [INFO][5480] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.673 [INFO][5480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.675134 containerd[1649]: 2025-11-08 00:32:06.674 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.675522 containerd[1649]: time="2025-11-08T00:32:06.675161327Z" level=info msg="TearDown network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" successfully" Nov 8 00:32:06.675522 containerd[1649]: time="2025-11-08T00:32:06.675178116Z" level=info msg="StopPodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" returns successfully" Nov 8 00:32:06.675946 containerd[1649]: time="2025-11-08T00:32:06.675738772Z" level=info msg="RemovePodSandbox for \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\"" Nov 8 00:32:06.675946 containerd[1649]: time="2025-11-08T00:32:06.675757132Z" level=info msg="Forcibly stopping sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\"" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.695 [WARNING][5494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0", GenerateName:"calico-apiserver-84758c967d-", Namespace:"calico-apiserver", SelfLink:"", UID:"536546db-8e23-43bc-ada9-ff6aca8accce", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84758c967d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e02226b20918f15a8fd45a4034b30a9b9dc3506c662c6c2b81790bd98c0fed6", Pod:"calico-apiserver-84758c967d-czg8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif15c353ce9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.695 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.695 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" iface="eth0" netns="" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.695 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.695 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.708 [INFO][5501] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.708 [INFO][5501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.708 [INFO][5501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.712 [WARNING][5501] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.712 [INFO][5501] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" HandleID="k8s-pod-network.96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Workload="localhost-k8s-calico--apiserver--84758c967d--czg8s-eth0" Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.712 [INFO][5501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.714881 containerd[1649]: 2025-11-08 00:32:06.713 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8" Nov 8 00:32:06.715249 containerd[1649]: time="2025-11-08T00:32:06.714907860Z" level=info msg="TearDown network for sandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" successfully" Nov 8 00:32:06.739557 containerd[1649]: time="2025-11-08T00:32:06.739455904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.739557 containerd[1649]: time="2025-11-08T00:32:06.739510025Z" level=info msg="RemovePodSandbox \"96878bc64984967382d672bcc498f35002b46b5f999b2ea235d916cf67b3a2f8\" returns successfully" Nov 8 00:32:06.740068 containerd[1649]: time="2025-11-08T00:32:06.739845067Z" level=info msg="StopPodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\"" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.767 [WARNING][5515] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4wsd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5b205c6-f534-4f27-bd2e-0a8fe1443335", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33", Pod:"csi-node-driver-m4wsd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93bea2385e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.769 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.769 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" iface="eth0" netns="" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.769 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.769 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.784 [INFO][5522] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.784 [INFO][5522] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.784 [INFO][5522] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.788 [WARNING][5522] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.788 [INFO][5522] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.793 [INFO][5522] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.796116 containerd[1649]: 2025-11-08 00:32:06.794 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.796116 containerd[1649]: time="2025-11-08T00:32:06.796017476Z" level=info msg="TearDown network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" successfully" Nov 8 00:32:06.796116 containerd[1649]: time="2025-11-08T00:32:06.796047938Z" level=info msg="StopPodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" returns successfully" Nov 8 00:32:06.797263 containerd[1649]: time="2025-11-08T00:32:06.797071603Z" level=info msg="RemovePodSandbox for \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\"" Nov 8 00:32:06.797263 containerd[1649]: time="2025-11-08T00:32:06.797090191Z" level=info msg="Forcibly stopping sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\"" Nov 8 00:32:06.861053 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:32:06.857248 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:32:06.857263 systemd-resolved[1542]: Flushed all caches. Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.822 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4wsd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5b205c6-f534-4f27-bd2e-0a8fe1443335", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90b0e7e5441d77290d5e556ac669a0805ead84e6b16b8778d2e5514b9d191f33", Pod:"csi-node-driver-m4wsd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali93bea2385e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.822 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.822 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" iface="eth0" netns="" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.822 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.822 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.853 [INFO][5543] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.853 [INFO][5543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.853 [INFO][5543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.861 [WARNING][5543] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.861 [INFO][5543] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" HandleID="k8s-pod-network.24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Workload="localhost-k8s-csi--node--driver--m4wsd-eth0" Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.862 [INFO][5543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.865164 containerd[1649]: 2025-11-08 00:32:06.863 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b" Nov 8 00:32:06.866359 containerd[1649]: time="2025-11-08T00:32:06.865614691Z" level=info msg="TearDown network for sandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" successfully" Nov 8 00:32:06.885488 containerd[1649]: time="2025-11-08T00:32:06.885454353Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:06.885860 containerd[1649]: time="2025-11-08T00:32:06.885672908Z" level=info msg="RemovePodSandbox \"24ba4751fabe1dc2f6631bba257332c1654347d2c88ebc4dc0a88fcd43b3da0b\" returns successfully" Nov 8 00:32:06.886113 containerd[1649]: time="2025-11-08T00:32:06.886079297Z" level=info msg="StopPodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\"" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.909 [WARNING][5557] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" WorkloadEndpoint="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.909 [INFO][5557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.909 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" iface="eth0" netns="" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.909 [INFO][5557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.909 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.924 [INFO][5564] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.924 [INFO][5564] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.924 [INFO][5564] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.929 [WARNING][5564] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.929 [INFO][5564] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.930 [INFO][5564] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.932646 containerd[1649]: 2025-11-08 00:32:06.931 [INFO][5557] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.932646 containerd[1649]: time="2025-11-08T00:32:06.932560162Z" level=info msg="TearDown network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" successfully" Nov 8 00:32:06.932646 containerd[1649]: time="2025-11-08T00:32:06.932579545Z" level=info msg="StopPodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" returns successfully" Nov 8 00:32:06.933534 containerd[1649]: time="2025-11-08T00:32:06.933301846Z" level=info msg="RemovePodSandbox for \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\"" Nov 8 00:32:06.933534 containerd[1649]: time="2025-11-08T00:32:06.933321460Z" level=info msg="Forcibly stopping sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\"" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.971 [WARNING][5578] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" WorkloadEndpoint="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.971 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.971 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" iface="eth0" netns="" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.971 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.971 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.988 [INFO][5585] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.988 [INFO][5585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.988 [INFO][5585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.992 [WARNING][5585] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.992 [INFO][5585] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" HandleID="k8s-pod-network.f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Workload="localhost-k8s-whisker--764fb649d4--rkjxq-eth0" Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.993 [INFO][5585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:06.995842 containerd[1649]: 2025-11-08 00:32:06.994 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0" Nov 8 00:32:06.996924 containerd[1649]: time="2025-11-08T00:32:06.996251598Z" level=info msg="TearDown network for sandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" successfully" Nov 8 00:32:07.005902 containerd[1649]: time="2025-11-08T00:32:07.005873463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:07.006079 containerd[1649]: time="2025-11-08T00:32:07.006067141Z" level=info msg="RemovePodSandbox \"f61f076e1d917da84535491d563c1e29d08b4d894442db8d86e62f6b688e02a0\" returns successfully" Nov 8 00:32:08.478404 containerd[1649]: time="2025-11-08T00:32:08.478371923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:32:08.838959 containerd[1649]: time="2025-11-08T00:32:08.838792414Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:08.839640 containerd[1649]: time="2025-11-08T00:32:08.839610704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:32:08.839725 containerd[1649]: time="2025-11-08T00:32:08.839671626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:32:08.839935 kubelet[2900]: E1108 00:32:08.839762 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:08.839935 kubelet[2900]: E1108 00:32:08.839812 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:08.840874 kubelet[2900]: E1108 00:32:08.839966 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:08.842659 containerd[1649]: time="2025-11-08T00:32:08.842638368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:32:09.220746 containerd[1649]: time="2025-11-08T00:32:09.215831967Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:09.225528 containerd[1649]: time="2025-11-08T00:32:09.225464537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:32:09.225700 containerd[1649]: time="2025-11-08T00:32:09.225506456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:32:09.225978 kubelet[2900]: E1108 00:32:09.225948 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:09.226269 kubelet[2900]: E1108 00:32:09.225988 2900 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:09.226269 kubelet[2900]: E1108 00:32:09.226073 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:09.227415 kubelet[2900]: E1108 00:32:09.227385 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:32:11.479182 kubelet[2900]: E1108 00:32:11.479116 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:32:14.477538 kubelet[2900]: E1108 00:32:14.477419 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:32:17.479218 kubelet[2900]: E1108 00:32:17.479187 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:32:19.480543 kubelet[2900]: E1108 00:32:19.480108 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:32:21.479518 kubelet[2900]: E1108 00:32:21.479312 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:32:21.646905 systemd[1]: Started sshd@7-139.178.70.109:22-147.75.109.163:38168.service - OpenSSH per-connection server daemon (147.75.109.163:38168). Nov 8 00:32:21.884865 sshd[5623]: Accepted publickey for core from 147.75.109.163 port 38168 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:21.905543 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:21.936789 systemd-logind[1620]: New session 10 of user core. Nov 8 00:32:21.941514 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:32:22.968504 sshd[5623]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:22.972805 systemd[1]: sshd@7-139.178.70.109:22-147.75.109.163:38168.service: Deactivated successfully. Nov 8 00:32:22.977334 systemd-logind[1620]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:32:22.977733 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:32:22.978649 systemd-logind[1620]: Removed session 10. Nov 8 00:32:23.490262 kubelet[2900]: E1108 00:32:23.490212 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:32:26.481365 containerd[1649]: time="2025-11-08T00:32:26.481072745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:32:26.859447 containerd[1649]: time="2025-11-08T00:32:26.859409122Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:26.897209 containerd[1649]: time="2025-11-08T00:32:26.897167647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:32:26.897412 containerd[1649]: time="2025-11-08T00:32:26.897209555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:32:26.897843 kubelet[2900]: E1108 00:32:26.897353 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:26.898204 kubelet[2900]: E1108 00:32:26.897840 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:26.898662 kubelet[2900]: E1108 00:32:26.897980 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a618f084f8064cdab9db195677f26467,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:26.901272 containerd[1649]: time="2025-11-08T00:32:26.901241885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:32:27.253759 containerd[1649]: time="2025-11-08T00:32:27.253587609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:27.254309 containerd[1649]: time="2025-11-08T00:32:27.254170523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:32:27.254309 containerd[1649]: time="2025-11-08T00:32:27.254229362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:27.254411 kubelet[2900]: E1108 00:32:27.254331 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:27.254411 kubelet[2900]: E1108 00:32:27.254378 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:27.254522 kubelet[2900]: E1108 00:32:27.254476 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:27.256106 kubelet[2900]: E1108 00:32:27.255893 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:32:27.480003 containerd[1649]: time="2025-11-08T00:32:27.478447467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:27.866943 containerd[1649]: time="2025-11-08T00:32:27.866022387Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:27.869093 containerd[1649]: time="2025-11-08T00:32:27.869007855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:27.869093 containerd[1649]: time="2025-11-08T00:32:27.869060365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:27.869997 kubelet[2900]: E1108 00:32:27.869657 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:27.869997 kubelet[2900]: E1108 00:32:27.869704 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:27.869997 kubelet[2900]: E1108 00:32:27.869801 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8w9hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84758c967d-czg8s_calico-apiserver(536546db-8e23-43bc-ada9-ff6aca8accce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:27.872078 kubelet[2900]: E1108 00:32:27.872024 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:32:27.977235 systemd[1]: Started sshd@8-139.178.70.109:22-147.75.109.163:38178.service - OpenSSH per-connection server daemon (147.75.109.163:38178). Nov 8 00:32:28.067475 sshd[5648]: Accepted publickey for core from 147.75.109.163 port 38178 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:28.068671 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:28.071801 systemd-logind[1620]: New session 11 of user core. Nov 8 00:32:28.077194 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:32:28.301042 sshd[5648]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:28.302997 systemd-logind[1620]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:32:28.305220 systemd[1]: sshd@8-139.178.70.109:22-147.75.109.163:38178.service: Deactivated successfully. Nov 8 00:32:28.306917 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:32:28.309407 systemd-logind[1620]: Removed session 11. 
Nov 8 00:32:31.482758 containerd[1649]: time="2025-11-08T00:32:31.482730045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:31.855947 containerd[1649]: time="2025-11-08T00:32:31.855892670Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:31.856230 containerd[1649]: time="2025-11-08T00:32:31.856207538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:31.856287 containerd[1649]: time="2025-11-08T00:32:31.856265801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:31.856429 kubelet[2900]: E1108 00:32:31.856375 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:31.856429 kubelet[2900]: E1108 00:32:31.856415 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:31.857738 kubelet[2900]: E1108 00:32:31.856794 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9gj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84758c967d-hp26p_calico-apiserver(558dc8c2-70d1-4eda-a967-93f57dec2dc2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:31.858149 containerd[1649]: time="2025-11-08T00:32:31.857517642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:32:31.858192 kubelet[2900]: E1108 00:32:31.857838 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:32:32.210349 containerd[1649]: time="2025-11-08T00:32:32.210260346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:32.211424 containerd[1649]: time="2025-11-08T00:32:32.211393395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:32:32.211526 containerd[1649]: time="2025-11-08T00:32:32.211447735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:32.211585 kubelet[2900]: E1108 00:32:32.211545 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:32.211585 kubelet[2900]: E1108 00:32:32.211580 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:32.211959 kubelet[2900]: E1108 00:32:32.211662 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fqms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-655bcd5b7f-mvm84_calico-system(d8943d47-ae19-484d-8d89-dda3dcc29a60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:32.213190 kubelet[2900]: E1108 00:32:32.213163 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:32:32.479283 containerd[1649]: time="2025-11-08T00:32:32.478072032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:32:32.828059 containerd[1649]: time="2025-11-08T00:32:32.827817801Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:32.828665 containerd[1649]: time="2025-11-08T00:32:32.828531130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:32:32.828665 containerd[1649]: time="2025-11-08T00:32:32.828584097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:32.828948 kubelet[2900]: E1108 00:32:32.828761 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:32.828948 kubelet[2900]: E1108 00:32:32.828793 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:32.828948 kubelet[2900]: E1108 00:32:32.828878 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtt6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cblkz_calico-system(c0dfff3f-1568-463e-aed1-906fd9d64aa0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:32.830793 kubelet[2900]: E1108 00:32:32.830185 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:32:33.308069 systemd[1]: Started sshd@9-139.178.70.109:22-147.75.109.163:39978.service - OpenSSH per-connection server daemon (147.75.109.163:39978). Nov 8 00:32:33.352616 sshd[5664]: Accepted publickey for core from 147.75.109.163 port 39978 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:33.353791 sshd[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:33.356515 systemd-logind[1620]: New session 12 of user core. Nov 8 00:32:33.359097 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:32:33.469951 sshd[5664]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:33.474198 systemd[1]: Started sshd@10-139.178.70.109:22-147.75.109.163:39988.service - OpenSSH per-connection server daemon (147.75.109.163:39988). Nov 8 00:32:33.475281 systemd[1]: sshd@9-139.178.70.109:22-147.75.109.163:39978.service: Deactivated successfully. Nov 8 00:32:33.478623 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:32:33.480339 systemd-logind[1620]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:32:33.482103 systemd-logind[1620]: Removed session 12. Nov 8 00:32:33.515323 sshd[5676]: Accepted publickey for core from 147.75.109.163 port 39988 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:33.515809 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:33.523415 systemd-logind[1620]: New session 13 of user core. 
Nov 8 00:32:33.532155 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:32:33.723094 sshd[5676]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:33.733151 systemd[1]: Started sshd@11-139.178.70.109:22-147.75.109.163:39994.service - OpenSSH per-connection server daemon (147.75.109.163:39994). Nov 8 00:32:33.734969 systemd[1]: sshd@10-139.178.70.109:22-147.75.109.163:39988.service: Deactivated successfully. Nov 8 00:32:33.738527 systemd-logind[1620]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:32:33.744130 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:32:33.749625 systemd-logind[1620]: Removed session 13. Nov 8 00:32:33.792721 sshd[5688]: Accepted publickey for core from 147.75.109.163 port 39994 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:33.793679 sshd[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:33.796686 systemd-logind[1620]: New session 14 of user core. Nov 8 00:32:33.804226 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:32:33.918087 sshd[5688]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:33.924043 systemd[1]: sshd@11-139.178.70.109:22-147.75.109.163:39994.service: Deactivated successfully. Nov 8 00:32:33.925300 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:32:33.927678 systemd-logind[1620]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:32:33.928771 systemd-logind[1620]: Removed session 14. Nov 8 00:32:38.476996 containerd[1649]: time="2025-11-08T00:32:38.476960281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:32:38.838118 containerd[1649]: time="2025-11-08T00:32:38.838082569Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:38.845511 containerd[1649]: time="2025-11-08T00:32:38.845465228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:32:38.845655 containerd[1649]: time="2025-11-08T00:32:38.845555463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:32:38.845716 kubelet[2900]: E1108 00:32:38.845688 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:38.845957 kubelet[2900]: E1108 00:32:38.845725 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:38.845957 kubelet[2900]: E1108 00:32:38.845801 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:38.848079 containerd[1649]: time="2025-11-08T00:32:38.848051921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:32:38.925132 systemd[1]: Started sshd@12-139.178.70.109:22-147.75.109.163:40006.service - OpenSSH per-connection server daemon (147.75.109.163:40006). Nov 8 00:32:38.955267 sshd[5709]: Accepted publickey for core from 147.75.109.163 port 40006 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:38.956115 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:38.958671 systemd-logind[1620]: New session 15 of user core. Nov 8 00:32:38.965082 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:32:39.108073 sshd[5709]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:39.111880 systemd-logind[1620]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:32:39.112322 systemd[1]: sshd@12-139.178.70.109:22-147.75.109.163:40006.service: Deactivated successfully. Nov 8 00:32:39.114683 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:32:39.118068 systemd-logind[1620]: Removed session 15. 
Nov 8 00:32:39.298281 containerd[1649]: time="2025-11-08T00:32:39.298249784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:39.299311 containerd[1649]: time="2025-11-08T00:32:39.299215833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:32:39.299311 containerd[1649]: time="2025-11-08T00:32:39.299262292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:32:39.300792 kubelet[2900]: E1108 00:32:39.299438 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:39.300792 kubelet[2900]: E1108 00:32:39.299469 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:39.300792 kubelet[2900]: E1108 00:32:39.299539 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4wsd_calico-system(c5b205c6-f534-4f27-bd2e-0a8fe1443335): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:39.301123 kubelet[2900]: E1108 00:32:39.300893 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:32:39.480330 kubelet[2900]: E1108 00:32:39.478813 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:32:42.477058 kubelet[2900]: E1108 00:32:42.477001 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:32:43.477946 kubelet[2900]: E1108 00:32:43.477894 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:32:44.119177 systemd[1]: Started sshd@13-139.178.70.109:22-147.75.109.163:33634.service - OpenSSH per-connection server daemon (147.75.109.163:33634). Nov 8 00:32:44.447992 sshd[5748]: Accepted publickey for core from 147.75.109.163 port 33634 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:44.449965 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:44.457138 systemd-logind[1620]: New session 16 of user core. Nov 8 00:32:44.464337 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:32:44.709138 sshd[5748]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:44.711337 systemd-logind[1620]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:32:44.712214 systemd[1]: sshd@13-139.178.70.109:22-147.75.109.163:33634.service: Deactivated successfully. Nov 8 00:32:44.713837 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:32:44.715013 systemd-logind[1620]: Removed session 16. Nov 8 00:32:44.873980 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:32:44.872994 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:32:44.873008 systemd-resolved[1542]: Flushed all caches. 
Nov 8 00:32:45.479332 kubelet[2900]: E1108 00:32:45.477698 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:32:47.477901 kubelet[2900]: E1108 00:32:47.477869 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:32:49.722102 systemd[1]: Started sshd@14-139.178.70.109:22-147.75.109.163:33650.service - OpenSSH per-connection server daemon (147.75.109.163:33650). Nov 8 00:32:49.750416 sshd[5763]: Accepted publickey for core from 147.75.109.163 port 33650 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:49.751339 sshd[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:49.753898 systemd-logind[1620]: New session 17 of user core. Nov 8 00:32:49.768278 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:32:49.866041 sshd[5763]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:49.867776 systemd[1]: sshd@14-139.178.70.109:22-147.75.109.163:33650.service: Deactivated successfully. Nov 8 00:32:49.867969 systemd-logind[1620]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:32:49.873040 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:32:49.873786 systemd-logind[1620]: Removed session 17. 
Nov 8 00:32:51.477762 kubelet[2900]: E1108 00:32:51.477095 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:32:53.482179 kubelet[2900]: E1108 00:32:53.482152 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:32:54.874163 systemd[1]: Started sshd@15-139.178.70.109:22-147.75.109.163:57394.service - OpenSSH per-connection server daemon (147.75.109.163:57394). Nov 8 00:32:54.907019 sshd[5777]: Accepted publickey for core from 147.75.109.163 port 57394 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:54.907737 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:54.910276 systemd-logind[1620]: New session 18 of user core. Nov 8 00:32:54.915084 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:32:55.019893 sshd[5777]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:55.025935 systemd[1]: Started sshd@16-139.178.70.109:22-147.75.109.163:57410.service - OpenSSH per-connection server daemon (147.75.109.163:57410). Nov 8 00:32:55.026263 systemd[1]: sshd@15-139.178.70.109:22-147.75.109.163:57394.service: Deactivated successfully. Nov 8 00:32:55.028430 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:32:55.030161 systemd-logind[1620]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:32:55.030800 systemd-logind[1620]: Removed session 18. 
Nov 8 00:32:55.061379 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 57410 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:55.062511 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:55.067057 systemd-logind[1620]: New session 19 of user core. Nov 8 00:32:55.074079 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:32:55.480853 kubelet[2900]: E1108 00:32:55.480226 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:32:55.726481 sshd[5788]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:55.731578 systemd[1]: Started sshd@17-139.178.70.109:22-147.75.109.163:57422.service - OpenSSH per-connection server daemon (147.75.109.163:57422). Nov 8 00:32:55.731951 systemd[1]: sshd@16-139.178.70.109:22-147.75.109.163:57410.service: Deactivated successfully. Nov 8 00:32:55.744434 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:32:55.747964 systemd-logind[1620]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:32:55.749598 systemd-logind[1620]: Removed session 19. Nov 8 00:32:55.775722 sshd[5800]: Accepted publickey for core from 147.75.109.163 port 57422 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:55.776694 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:55.779945 systemd-logind[1620]: New session 20 of user core. Nov 8 00:32:55.784215 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:32:56.412163 sshd[5800]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:56.420121 systemd[1]: Started sshd@18-139.178.70.109:22-147.75.109.163:57426.service - OpenSSH per-connection server daemon (147.75.109.163:57426). Nov 8 00:32:56.421314 systemd[1]: sshd@17-139.178.70.109:22-147.75.109.163:57422.service: Deactivated successfully. Nov 8 00:32:56.432786 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:32:56.433263 systemd-logind[1620]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:32:56.435164 systemd-logind[1620]: Removed session 20. Nov 8 00:32:56.503082 sshd[5815]: Accepted publickey for core from 147.75.109.163 port 57426 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:56.503981 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:56.506600 systemd-logind[1620]: New session 21 of user core. Nov 8 00:32:56.510096 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:32:56.910424 sshd[5815]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:56.914101 systemd[1]: Started sshd@19-139.178.70.109:22-147.75.109.163:57438.service - OpenSSH per-connection server daemon (147.75.109.163:57438). Nov 8 00:32:56.924400 systemd[1]: sshd@18-139.178.70.109:22-147.75.109.163:57426.service: Deactivated successfully. 
Nov 8 00:32:56.925488 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:32:56.934372 systemd-logind[1620]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:32:56.935987 systemd-logind[1620]: Removed session 21. Nov 8 00:32:56.974165 systemd-journald[1198]: Under memory pressure, flushing caches. Nov 8 00:32:56.973949 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:32:56.973953 systemd-resolved[1542]: Flushed all caches. Nov 8 00:32:57.020116 sshd[5830]: Accepted publickey for core from 147.75.109.163 port 57438 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:32:57.019643 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:57.026413 systemd-logind[1620]: New session 22 of user core. Nov 8 00:32:57.032071 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:32:57.196534 sshd[5830]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:57.201177 systemd[1]: sshd@19-139.178.70.109:22-147.75.109.163:57438.service: Deactivated successfully. Nov 8 00:32:57.205469 systemd-logind[1620]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:32:57.206039 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:32:57.208683 systemd-logind[1620]: Removed session 22. Nov 8 00:32:58.525037 kubelet[2900]: E1108 00:32:58.525008 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:32:59.016992 systemd-resolved[1542]: Under memory pressure, flushing caches. Nov 8 00:32:59.016997 systemd-resolved[1542]: Flushed all caches. Nov 8 00:32:59.017980 systemd-journald[1198]: Under memory pressure, flushing caches. 
Nov 8 00:33:00.477025 kubelet[2900]: E1108 00:33:00.476999 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:33:01.479677 kubelet[2900]: E1108 00:33:01.479573 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0" Nov 8 00:33:02.212225 systemd[1]: Started sshd@20-139.178.70.109:22-147.75.109.163:41148.service - OpenSSH per-connection server daemon (147.75.109.163:41148). Nov 8 00:33:02.262406 sshd[5849]: Accepted publickey for core from 147.75.109.163 port 41148 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:33:02.265048 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:02.268421 systemd-logind[1620]: New session 23 of user core. Nov 8 00:33:02.271222 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:33:02.465225 sshd[5849]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:02.467784 systemd[1]: sshd@20-139.178.70.109:22-147.75.109.163:41148.service: Deactivated successfully. Nov 8 00:33:02.469363 systemd-logind[1620]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:33:02.471140 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:33:02.472504 systemd-logind[1620]: Removed session 23. Nov 8 00:33:06.486779 kubelet[2900]: E1108 00:33:06.478637 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4wsd" podUID="c5b205c6-f534-4f27-bd2e-0a8fe1443335" Nov 8 00:33:07.472129 systemd[1]: Started sshd@21-139.178.70.109:22-147.75.109.163:41162.service - OpenSSH per-connection server daemon (147.75.109.163:41162). 
Nov 8 00:33:07.502325 sshd[5870]: Accepted publickey for core from 147.75.109.163 port 41162 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:33:07.503202 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:07.505544 systemd-logind[1620]: New session 24 of user core. Nov 8 00:33:07.515184 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:33:07.653009 sshd[5870]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:07.655471 systemd[1]: sshd@21-139.178.70.109:22-147.75.109.163:41162.service: Deactivated successfully. Nov 8 00:33:07.655770 systemd-logind[1620]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:33:07.657483 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:33:07.658465 systemd-logind[1620]: Removed session 24. Nov 8 00:33:08.494973 containerd[1649]: time="2025-11-08T00:33:08.494827906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:33:08.838429 containerd[1649]: time="2025-11-08T00:33:08.838391564Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:08.838898 containerd[1649]: time="2025-11-08T00:33:08.838860152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:33:08.839009 containerd[1649]: time="2025-11-08T00:33:08.838972581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:33:08.839750 kubelet[2900]: E1108 00:33:08.839716 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:33:08.841142 kubelet[2900]: E1108 00:33:08.841117 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:33:08.841253 kubelet[2900]: E1108 00:33:08.841219 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a618f084f8064cdab9db195677f26467,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:08.843600 containerd[1649]: time="2025-11-08T00:33:08.843564265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:33:09.186521 containerd[1649]: time="2025-11-08T00:33:09.186245630Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:09.186931 containerd[1649]: time="2025-11-08T00:33:09.186804141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:33:09.186931 containerd[1649]: time="2025-11-08T00:33:09.186871721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:33:09.187053 kubelet[2900]: E1108 00:33:09.186975 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:33:09.187053 kubelet[2900]: E1108 00:33:09.187010 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:33:09.187114 kubelet[2900]: E1108 00:33:09.187089 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b7c8bd886-mhdkg_calico-system(1e4b7614-5497-46ae-a96f-7f92d3916cde): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:09.188383 kubelet[2900]: E1108 00:33:09.188329 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b7c8bd886-mhdkg" podUID="1e4b7614-5497-46ae-a96f-7f92d3916cde" Nov 8 00:33:10.477950 containerd[1649]: time="2025-11-08T00:33:10.477586568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:33:10.822877 containerd[1649]: time="2025-11-08T00:33:10.822804069Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:33:10.823253 containerd[1649]: time="2025-11-08T00:33:10.823218651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:33:10.823325 containerd[1649]: time="2025-11-08T00:33:10.823294131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:33:10.823428 kubelet[2900]: E1108 00:33:10.823402 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:33:10.823874 kubelet[2900]: E1108 00:33:10.823436 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:33:10.823874 kubelet[2900]: E1108 00:33:10.823564 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8w9hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod calico-apiserver-84758c967d-czg8s_calico-apiserver(536546db-8e23-43bc-ada9-ff6aca8accce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:10.825266 kubelet[2900]: E1108 00:33:10.825226 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-czg8s" podUID="536546db-8e23-43bc-ada9-ff6aca8accce" Nov 8 00:33:11.479222 kubelet[2900]: E1108 00:33:11.479026 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-655bcd5b7f-mvm84" podUID="d8943d47-ae19-484d-8d89-dda3dcc29a60" Nov 8 00:33:11.479222 kubelet[2900]: E1108 00:33:11.479083 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84758c967d-hp26p" podUID="558dc8c2-70d1-4eda-a967-93f57dec2dc2" Nov 8 00:33:12.662090 systemd[1]: Started sshd@22-139.178.70.109:22-147.75.109.163:36268.service - OpenSSH per-connection server daemon (147.75.109.163:36268). Nov 8 00:33:12.700457 sshd[5888]: Accepted publickey for core from 147.75.109.163 port 36268 ssh2: RSA SHA256:w/xGfnUobRwx5tSVykkPMEgd5qYNjwGoDH1wcB/4M9g Nov 8 00:33:12.701494 sshd[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:12.704784 systemd-logind[1620]: New session 25 of user core. Nov 8 00:33:12.715092 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:33:12.954672 sshd[5888]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:12.960785 systemd-logind[1620]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:33:12.960967 systemd[1]: sshd@22-139.178.70.109:22-147.75.109.163:36268.service: Deactivated successfully. Nov 8 00:33:12.963047 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:33:12.963780 systemd-logind[1620]: Removed session 25. 
Nov 8 00:33:14.486958 containerd[1649]: time="2025-11-08T00:33:14.486511756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:33:14.847577 containerd[1649]: time="2025-11-08T00:33:14.846989097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:14.848167 containerd[1649]: time="2025-11-08T00:33:14.848083885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:33:14.848167 containerd[1649]: time="2025-11-08T00:33:14.848141392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:33:14.848257 kubelet[2900]: E1108 00:33:14.848227 2900 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:33:14.853761 kubelet[2900]: E1108 00:33:14.853732 2900 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:33:14.853922 kubelet[2900]: E1108 00:33:14.853846 2900 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtt6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cblkz_calico-system(c0dfff3f-1568-463e-aed1-906fd9d64aa0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:14.855380 kubelet[2900]: E1108 00:33:14.855359 2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cblkz" podUID="c0dfff3f-1568-463e-aed1-906fd9d64aa0"