Oct 31 00:48:57.734984 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025
Oct 31 00:48:57.735000 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:48:57.735006 kernel: Disabled fast string operations
Oct 31 00:48:57.735011 kernel: BIOS-provided physical RAM map:
Oct 31 00:48:57.735014 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 31 00:48:57.735019 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 31 00:48:57.735025 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 31 00:48:57.735029 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 31 00:48:57.735033 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 31 00:48:57.735037 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 31 00:48:57.735042 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 31 00:48:57.735046 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 31 00:48:57.735050 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 31 00:48:57.735054 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 31 00:48:57.735061 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 31 00:48:57.735066 kernel: NX (Execute Disable) protection: active
Oct 31 00:48:57.735070 kernel: APIC: Static calls initialized
Oct 31 00:48:57.735075 kernel: SMBIOS 2.7 present.
Oct 31 00:48:57.735080 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 31 00:48:57.735085 kernel: vmware: hypercall mode: 0x00
Oct 31 00:48:57.735090 kernel: Hypervisor detected: VMware
Oct 31 00:48:57.735095 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 31 00:48:57.735101 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 31 00:48:57.735105 kernel: vmware: using clock offset of 2596498976 ns
Oct 31 00:48:57.735110 kernel: tsc: Detected 3408.000 MHz processor
Oct 31 00:48:57.735115 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 00:48:57.735120 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 00:48:57.735125 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 31 00:48:57.735130 kernel: total RAM covered: 3072M
Oct 31 00:48:57.735135 kernel: Found optimal setting for mtrr clean up
Oct 31 00:48:57.735141 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 31 00:48:57.735147 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Oct 31 00:48:57.735152 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 00:48:57.735157 kernel: Using GB pages for direct mapping
Oct 31 00:48:57.735162 kernel: ACPI: Early table checksum verification disabled
Oct 31 00:48:57.735167 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 31 00:48:57.735172 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 31 00:48:57.735176 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 31 00:48:57.735181 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 31 00:48:57.735186 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 31 00:48:57.735194 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 31 00:48:57.735199 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 31 00:48:57.735204 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 31 00:48:57.735209 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 31 00:48:57.735215 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 31 00:48:57.735221 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 31 00:48:57.735226 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 31 00:48:57.735231 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 31 00:48:57.735236 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 31 00:48:57.735242 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 31 00:48:57.735247 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 31 00:48:57.735252 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 31 00:48:57.735257 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 31 00:48:57.735262 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 31 00:48:57.735267 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 31 00:48:57.735274 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 31 00:48:57.735279 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 31 00:48:57.735284 kernel: system APIC only can use physical flat
Oct 31 00:48:57.735289 kernel: APIC: Switched APIC routing to: physical flat
Oct 31 00:48:57.735294 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 31 00:48:57.735299 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 31 00:48:57.735304 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 31 00:48:57.735309 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 31 00:48:57.735314 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 31 00:48:57.735320 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 31 00:48:57.735332 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 31 00:48:57.735337 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 31 00:48:57.735342 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Oct 31 00:48:57.735347 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Oct 31 00:48:57.735352 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Oct 31 00:48:57.735357 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Oct 31 00:48:57.735362 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Oct 31 00:48:57.735367 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Oct 31 00:48:57.735372 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Oct 31 00:48:57.735377 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Oct 31 00:48:57.735384 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Oct 31 00:48:57.735389 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Oct 31 00:48:57.735394 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Oct 31 00:48:57.735399 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Oct 31 00:48:57.735404 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Oct 31 00:48:57.735409 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Oct 31 00:48:57.735414 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Oct 31 00:48:57.735419 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Oct 31 00:48:57.735423 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Oct 31 00:48:57.735429 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Oct 31 00:48:57.735435 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Oct 31 00:48:57.735440 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Oct 31 00:48:57.735445 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Oct 31 00:48:57.735450 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Oct 31 00:48:57.735455 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Oct 31 00:48:57.735460 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Oct 31 00:48:57.735465 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Oct 31 00:48:57.735470 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Oct 31 00:48:57.735475 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Oct 31 00:48:57.735480 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Oct 31 00:48:57.735486 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Oct 31 00:48:57.735491 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Oct 31 00:48:57.735496 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Oct 31 00:48:57.735501 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Oct 31 00:48:57.735506 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Oct 31 00:48:57.735511 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Oct 31 00:48:57.735516 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Oct 31 00:48:57.735521 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Oct 31 00:48:57.735526 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Oct 31 00:48:57.735531 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Oct 31 00:48:57.735538 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Oct 31 00:48:57.735543 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Oct 31 00:48:57.735548 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Oct 31 00:48:57.735553 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Oct 31 00:48:57.735558 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Oct 31 00:48:57.735563 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Oct 31 00:48:57.735568 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Oct 31 00:48:57.735573 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Oct 31 00:48:57.735578 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Oct 31 00:48:57.735583 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Oct 31 00:48:57.735589 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Oct 31 00:48:57.735594 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Oct 31 00:48:57.735599 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Oct 31 00:48:57.735608 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Oct 31 00:48:57.735614 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Oct 31 00:48:57.735619 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Oct 31 00:48:57.735625 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Oct 31 00:48:57.735630 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Oct 31 00:48:57.735636 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Oct 31 00:48:57.735642 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Oct 31 00:48:57.735647 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Oct 31 00:48:57.735652 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Oct 31 00:48:57.735658 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Oct 31 00:48:57.735663 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Oct 31 00:48:57.735669 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Oct 31 00:48:57.735674 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Oct 31 00:48:57.735679 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Oct 31 00:48:57.735685 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Oct 31 00:48:57.735690 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Oct 31 00:48:57.735696 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Oct 31 00:48:57.735702 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Oct 31 00:48:57.735707 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Oct 31 00:48:57.735713 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Oct 31 00:48:57.735718 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Oct 31 00:48:57.735723 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Oct 31 00:48:57.735729 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Oct 31 00:48:57.735734 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Oct 31 00:48:57.735739 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Oct 31 00:48:57.735744 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Oct 31 00:48:57.735751 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Oct 31 00:48:57.735756 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Oct 31 00:48:57.735762 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Oct 31 00:48:57.735767 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Oct 31 00:48:57.735772 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Oct 31 00:48:57.735778 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Oct 31 00:48:57.735783 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Oct 31 00:48:57.735788 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Oct 31 00:48:57.735794 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Oct 31 00:48:57.735799 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Oct 31 00:48:57.735805 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Oct 31 00:48:57.735811 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Oct 31 00:48:57.735816 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Oct 31 00:48:57.735821 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Oct 31 00:48:57.735827 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Oct 31 00:48:57.735832 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Oct 31 00:48:57.735838 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Oct 31 00:48:57.735843 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Oct 31 00:48:57.735848 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Oct 31 00:48:57.735853 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Oct 31 00:48:57.735860 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Oct 31 00:48:57.735865 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Oct 31 00:48:57.735871 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Oct 31 00:48:57.735876 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Oct 31 00:48:57.735881 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Oct 31 00:48:57.735887 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Oct 31 00:48:57.735892 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Oct 31 00:48:57.735898 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Oct 31 00:48:57.735903 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Oct 31 00:48:57.735908 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Oct 31 00:48:57.735913 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Oct 31 00:48:57.735920 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Oct 31 00:48:57.735925 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Oct 31 00:48:57.735931 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Oct 31 00:48:57.735937 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Oct 31 00:48:57.735942 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Oct 31 00:48:57.735947 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Oct 31 00:48:57.735953 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Oct 31 00:48:57.735958 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Oct 31 00:48:57.735963 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Oct 31 00:48:57.735969 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Oct 31 00:48:57.735975 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Oct 31 00:48:57.735981 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Oct 31 00:48:57.735986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 31 00:48:57.735991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 31 00:48:57.735997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 31 00:48:57.736003 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Oct 31 00:48:57.736008 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Oct 31 00:48:57.736014 kernel: Zone ranges:
Oct 31 00:48:57.736020 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 00:48:57.736026 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Oct 31 00:48:57.736032 kernel: Normal empty
Oct 31 00:48:57.736037 kernel: Movable zone start for each node
Oct 31 00:48:57.736042 kernel: Early memory node ranges
Oct 31 00:48:57.736048 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 31 00:48:57.736053 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 31 00:48:57.736059 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 31 00:48:57.736064 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 31 00:48:57.736070 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 00:48:57.736076 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 31 00:48:57.736082 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 31 00:48:57.736088 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 31 00:48:57.736093 kernel: system APIC only can use physical flat
Oct 31 00:48:57.736098 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 31 00:48:57.736104 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 31 00:48:57.736110 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Oct 31 00:48:57.736115 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Oct 31 00:48:57.736120 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Oct 31 00:48:57.736126 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Oct 31 00:48:57.736132 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Oct 31 00:48:57.736138 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Oct 31 00:48:57.736143 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Oct 31 00:48:57.736149 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Oct 31 00:48:57.736154 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Oct 31 00:48:57.736160 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Oct 31 00:48:57.736165 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Oct 31 00:48:57.736171 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Oct 31 00:48:57.736176 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Oct 31 00:48:57.736181 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Oct 31 00:48:57.736188 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Oct 31 00:48:57.736193 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Oct 31 00:48:57.736199 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Oct 31 00:48:57.736204 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Oct 31 00:48:57.736210 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Oct 31 00:48:57.736215 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Oct 31 00:48:57.736221 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Oct 31 00:48:57.736226 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Oct 31 00:48:57.736232 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Oct 31 00:48:57.736237 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Oct 31 00:48:57.736244 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Oct 31 00:48:57.736249 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Oct 31 00:48:57.736254 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Oct 31 00:48:57.736260 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Oct 31 00:48:57.736265 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Oct 31 00:48:57.736271 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Oct 31 00:48:57.736276 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Oct 31 00:48:57.736282 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Oct 31 00:48:57.736287 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Oct 31 00:48:57.736293 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Oct 31 00:48:57.736299 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Oct 31 00:48:57.736304 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Oct 31 00:48:57.736310 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Oct 31 00:48:57.736315 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Oct 31 00:48:57.736320 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Oct 31 00:48:57.736334 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Oct 31 00:48:57.736341 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Oct 31 00:48:57.736346 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Oct 31 00:48:57.736351 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Oct 31 00:48:57.736359 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Oct 31 00:48:57.736365 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Oct 31 00:48:57.736370 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Oct 31 00:48:57.736375 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Oct 31 00:48:57.736381 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Oct 31 00:48:57.736386 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Oct 31 00:48:57.736392 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Oct 31 00:48:57.736397 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Oct 31 00:48:57.736402 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Oct 31 00:48:57.736408 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Oct 31 00:48:57.736414 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Oct 31 00:48:57.736420 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Oct 31 00:48:57.736425 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Oct 31 00:48:57.736431 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Oct 31 00:48:57.736436 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Oct 31 00:48:57.736442 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Oct 31 00:48:57.736447 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Oct 31 00:48:57.736453 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Oct 31 00:48:57.736458 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Oct 31 00:48:57.736464 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Oct 31 00:48:57.736470 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Oct 31 00:48:57.736475 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Oct 31 00:48:57.736481 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Oct 31 00:48:57.736486 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Oct 31 00:48:57.736492 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Oct 31 00:48:57.736497 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Oct 31 00:48:57.736502 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Oct 31 00:48:57.736508 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Oct 31 00:48:57.736517 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Oct 31 00:48:57.736523 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Oct 31 00:48:57.736529 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Oct 31 00:48:57.736534 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Oct 31 00:48:57.736540 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Oct 31 00:48:57.736545 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Oct 31 00:48:57.736551 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Oct 31 00:48:57.736556 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Oct 31 00:48:57.736561 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Oct 31 00:48:57.736567 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Oct 31 00:48:57.736572 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Oct 31 00:48:57.736579 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Oct 31 00:48:57.736584 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Oct 31 00:48:57.736590 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Oct 31 00:48:57.736595 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Oct 31 00:48:57.736600 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Oct 31 00:48:57.736606 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Oct 31 00:48:57.736611 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Oct 31 00:48:57.736617 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Oct 31 00:48:57.736622 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Oct 31 00:48:57.736627 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Oct 31 00:48:57.736634 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Oct 31 00:48:57.736639 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Oct 31 00:48:57.736644 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Oct 31 00:48:57.736650 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Oct 31 00:48:57.736655 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Oct 31 00:48:57.736661 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Oct 31 00:48:57.736666 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Oct 31 00:48:57.736671 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Oct 31 00:48:57.736677 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Oct 31 00:48:57.736683 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Oct 31 00:48:57.736689 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Oct 31 00:48:57.736694 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Oct 31 00:48:57.736699 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Oct 31 00:48:57.736704 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Oct 31 00:48:57.736710 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Oct 31 00:48:57.736715 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Oct 31 00:48:57.736721 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Oct 31 00:48:57.736726 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Oct 31 00:48:57.736732 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Oct 31 00:48:57.736738 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Oct 31 00:48:57.736744 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Oct 31 00:48:57.736749 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Oct 31 00:48:57.736754 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Oct 31 00:48:57.736760 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Oct 31 00:48:57.736765 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Oct 31 00:48:57.736771 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Oct 31 00:48:57.736776 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Oct 31 00:48:57.736781 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Oct 31 00:48:57.736788 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Oct 31 00:48:57.736793 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Oct 31 00:48:57.736799 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Oct 31 00:48:57.736804 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Oct 31 00:48:57.736809 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Oct 31 00:48:57.736815 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Oct 31 00:48:57.736820 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Oct 31 00:48:57.736826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Oct 31 00:48:57.736832 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 00:48:57.736837 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Oct 31 00:48:57.736844 kernel: TSC deadline timer available
Oct 31 00:48:57.736849 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Oct 31 00:48:57.736855 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Oct 31 00:48:57.736860 kernel: Booting paravirtualized kernel on VMware hypervisor
Oct 31 00:48:57.736866 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 00:48:57.736871 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Oct 31 00:48:57.736877 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Oct 31 00:48:57.736882 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Oct 31 00:48:57.736888 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Oct 31 00:48:57.736894 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Oct 31 00:48:57.736900 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Oct 31 00:48:57.736905 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Oct 31 00:48:57.736910 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Oct 31 00:48:57.736923 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Oct 31 00:48:57.736929 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Oct 31 00:48:57.736935 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Oct 31 00:48:57.736941 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Oct 31 00:48:57.736946 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Oct 31 00:48:57.736953 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Oct 31 00:48:57.736959 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Oct 31 00:48:57.736965 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Oct 31 00:48:57.736970 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Oct 31 00:48:57.736976 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Oct 31 00:48:57.736982 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Oct 31 00:48:57.736988 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:48:57.736994 kernel: random: crng init done
Oct 31 00:48:57.737001 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Oct 31 00:48:57.737007 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Oct 31 00:48:57.737012 kernel: printk: log_buf_len min size: 262144 bytes
Oct 31 00:48:57.737018 kernel: printk: log_buf_len: 1048576 bytes
Oct 31 00:48:57.737024 kernel: printk: early log buf free: 239760(91%)
Oct 31 00:48:57.737030 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 00:48:57.737036 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 31 00:48:57.737041 kernel: Fallback order for Node 0: 0
Oct 31 00:48:57.737047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Oct 31 00:48:57.737054 kernel: Policy zone: DMA32
Oct 31 00:48:57.737060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 00:48:57.737066 kernel: Memory: 1936368K/2096628K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 160000K reserved, 0K cma-reserved)
Oct 31 00:48:57.737073 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Oct 31 00:48:57.737079 kernel: ftrace: allocating 37980 entries in 149 pages
Oct 31 00:48:57.737086 kernel: ftrace: allocated 149 pages with 4 groups
Oct 31 00:48:57.737091 kernel: Dynamic Preempt: voluntary
Oct 31 00:48:57.737097 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 00:48:57.737103 kernel: rcu: RCU event tracing is enabled.
Oct 31 00:48:57.737109 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Oct 31 00:48:57.737115 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 00:48:57.737121 kernel: Rude variant of Tasks RCU enabled.
Oct 31 00:48:57.737127 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 00:48:57.737133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 00:48:57.737138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Oct 31 00:48:57.737145 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Oct 31 00:48:57.737151 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Oct 31 00:48:57.737157 kernel: Console: colour VGA+ 80x25
Oct 31 00:48:57.737163 kernel: printk: console [tty0] enabled
Oct 31 00:48:57.737169 kernel: printk: console [ttyS0] enabled
Oct 31 00:48:57.737175 kernel: ACPI: Core revision 20230628
Oct 31 00:48:57.737181 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Oct 31 00:48:57.737187 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 00:48:57.737193 kernel: x2apic enabled
Oct 31 00:48:57.737200 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 31 00:48:57.737206 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 31 00:48:57.737212 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 31 00:48:57.737218 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Oct 31 00:48:57.737224 kernel: Disabled fast string operations
Oct 31 00:48:57.737231 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 31 00:48:57.737237 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Oct 31 00:48:57.737243 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 00:48:57.737249 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Oct 31 00:48:57.737256 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Oct 31 00:48:57.737262 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Oct 31 00:48:57.737268 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Oct 31 00:48:57.737273 kernel: RETBleed: Mitigation: Enhanced IBRS
Oct 31 00:48:57.737279 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 00:48:57.737285 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 31 00:48:57.737291 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 31 00:48:57.737297 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 31 00:48:57.737303 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 31 00:48:57.737310 kernel: active return thunk: its_return_thunk
Oct 31 00:48:57.737316 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 31 00:48:57.737322 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 00:48:57.737365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 00:48:57.737371 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 00:48:57.737377 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 00:48:57.737383 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 31 00:48:57.737389 kernel: Freeing SMP alternatives memory: 32K
Oct 31 00:48:57.737395 kernel: pid_max: default: 131072 minimum: 1024
Oct 31 00:48:57.737403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 31 00:48:57.737409 kernel: landlock: Up and running.
Oct 31 00:48:57.737414 kernel: SELinux: Initializing.
Oct 31 00:48:57.737420 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 00:48:57.737426 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 00:48:57.737432 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Oct 31 00:48:57.737438 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 31 00:48:57.737444 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 31 00:48:57.737451 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 31 00:48:57.737457 kernel: Performance Events: Skylake events, core PMU driver.
Oct 31 00:48:57.737463 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Oct 31 00:48:57.737469 kernel: core: CPUID marked event: 'instructions' unavailable
Oct 31 00:48:57.737474 kernel: core: CPUID marked event: 'bus cycles' unavailable
Oct 31 00:48:57.737480 kernel: core: CPUID marked event: 'cache references' unavailable
Oct 31 00:48:57.737486 kernel: core: CPUID marked event: 'cache misses' unavailable
Oct 31 00:48:57.737491 kernel: core: CPUID marked event: 'branch instructions' unavailable
Oct 31 00:48:57.737497 kernel: core: CPUID marked event: 'branch misses' unavailable
Oct 31 00:48:57.737504 kernel: ... version:                1
Oct 31 00:48:57.737510 kernel: ... bit width:              48
Oct 31 00:48:57.737515 kernel: ... generic registers:      4
Oct 31 00:48:57.737521 kernel: ... value mask:             0000ffffffffffff
Oct 31 00:48:57.737527 kernel: ... max period:             000000007fffffff
Oct 31 00:48:57.737533 kernel: ... fixed-purpose events:   0
Oct 31 00:48:57.737539 kernel: ... event mask:             000000000000000f
Oct 31 00:48:57.737544 kernel: signal: max sigframe size: 1776
Oct 31 00:48:57.737550 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 00:48:57.737557 kernel: rcu: Max phase no-delay instances is 400.
Oct 31 00:48:57.737563 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 31 00:48:57.737569 kernel: smp: Bringing up secondary CPUs ...
Oct 31 00:48:57.737575 kernel: smpboot: x86: Booting SMP configuration:
Oct 31 00:48:57.737581 kernel: ....
node #0, CPUs: #1 Oct 31 00:48:57.737587 kernel: Disabled fast string operations Oct 31 00:48:57.737592 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Oct 31 00:48:57.737598 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 31 00:48:57.737604 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 00:48:57.737610 kernel: smpboot: Max logical packages: 128 Oct 31 00:48:57.737617 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 31 00:48:57.737624 kernel: devtmpfs: initialized Oct 31 00:48:57.737630 kernel: x86/mm: Memory block size: 128MB Oct 31 00:48:57.737636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 31 00:48:57.737642 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 00:48:57.737648 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 31 00:48:57.737653 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 00:48:57.737659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 00:48:57.737665 kernel: audit: initializing netlink subsys (disabled) Oct 31 00:48:57.737672 kernel: audit: type=2000 audit(1761871735.089:1): state=initialized audit_enabled=0 res=1 Oct 31 00:48:57.737678 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 00:48:57.737684 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 00:48:57.737690 kernel: cpuidle: using governor menu Oct 31 00:48:57.737695 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 31 00:48:57.737701 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 00:48:57.737707 kernel: dca service started, version 1.12.1 Oct 31 00:48:57.737713 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Oct 31 00:48:57.737719 kernel: PCI: Using configuration type 1 for base access Oct 31 00:48:57.737726 kernel: kprobes: kprobe jump-optimization is 
enabled. All kprobes are optimized if possible. Oct 31 00:48:57.737732 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 00:48:57.737738 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 00:48:57.737743 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 00:48:57.737749 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 00:48:57.737755 kernel: ACPI: Added _OSI(Module Device) Oct 31 00:48:57.737761 kernel: ACPI: Added _OSI(Processor Device) Oct 31 00:48:57.737767 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 00:48:57.737773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 00:48:57.737780 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 31 00:48:57.737785 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 31 00:48:57.737791 kernel: ACPI: Interpreter enabled Oct 31 00:48:57.737797 kernel: ACPI: PM: (supports S0 S1 S5) Oct 31 00:48:57.737803 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 00:48:57.737809 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 00:48:57.737815 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 00:48:57.737820 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 31 00:48:57.737826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 31 00:48:57.737907 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 00:48:57.737965 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 31 00:48:57.738017 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 31 00:48:57.738026 kernel: PCI host bridge to bus 0000:00 Oct 31 00:48:57.738077 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 00:48:57.738124 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] 
Oct 31 00:48:57.738173 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 00:48:57.738219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 00:48:57.738265 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 31 00:48:57.738311 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 31 00:48:57.738384 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Oct 31 00:48:57.738441 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Oct 31 00:48:57.738499 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Oct 31 00:48:57.738560 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Oct 31 00:48:57.738613 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Oct 31 00:48:57.738666 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 31 00:48:57.738718 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 31 00:48:57.738770 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 31 00:48:57.738822 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 31 00:48:57.738884 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Oct 31 00:48:57.738936 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 31 00:48:57.738989 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 31 00:48:57.739045 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Oct 31 00:48:57.739098 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Oct 31 00:48:57.739150 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Oct 31 00:48:57.739208 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Oct 31 00:48:57.739260 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Oct 31 00:48:57.739312 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Oct 31 00:48:57.739373 kernel: pci 0000:00:0f.0: reg 0x18: [mem 
0xfe000000-0xfe7fffff] Oct 31 00:48:57.739425 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Oct 31 00:48:57.739476 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 00:48:57.739538 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Oct 31 00:48:57.739599 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.739652 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.739710 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.739762 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.739818 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.739871 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.739927 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.739982 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740038 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740091 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740151 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740204 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740260 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740316 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740386 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740439 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740496 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740549 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740608 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740661 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold 
Oct 31 00:48:57.740717 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740770 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740825 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740878 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.740935 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.740988 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741045 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741098 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741155 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741207 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741263 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741319 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741390 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741443 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741500 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741556 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741612 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741682 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741744 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741797 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741853 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.741906 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.741965 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742021 kernel: pci 0000:00:17.5: PME# 
supported from D0 D3hot D3cold Oct 31 00:48:57.742104 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742199 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.742258 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742312 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.742410 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742467 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.742528 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742597 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.742653 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742705 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.742780 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742850 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.742906 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.742958 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.743013 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.743065 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.743121 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.743172 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.743230 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Oct 31 00:48:57.743282 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.743504 kernel: pci_bus 0000:01: extended config space not accessible Oct 31 00:48:57.743563 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 00:48:57.743708 kernel: pci_bus 0000:02: extended config space not accessible Oct 31 00:48:57.743719 kernel: 
acpiphp: Slot [32] registered Oct 31 00:48:57.743727 kernel: acpiphp: Slot [33] registered Oct 31 00:48:57.743733 kernel: acpiphp: Slot [34] registered Oct 31 00:48:57.743739 kernel: acpiphp: Slot [35] registered Oct 31 00:48:57.743745 kernel: acpiphp: Slot [36] registered Oct 31 00:48:57.743751 kernel: acpiphp: Slot [37] registered Oct 31 00:48:57.743757 kernel: acpiphp: Slot [38] registered Oct 31 00:48:57.743763 kernel: acpiphp: Slot [39] registered Oct 31 00:48:57.743769 kernel: acpiphp: Slot [40] registered Oct 31 00:48:57.743775 kernel: acpiphp: Slot [41] registered Oct 31 00:48:57.743781 kernel: acpiphp: Slot [42] registered Oct 31 00:48:57.743788 kernel: acpiphp: Slot [43] registered Oct 31 00:48:57.743794 kernel: acpiphp: Slot [44] registered Oct 31 00:48:57.743799 kernel: acpiphp: Slot [45] registered Oct 31 00:48:57.743805 kernel: acpiphp: Slot [46] registered Oct 31 00:48:57.743811 kernel: acpiphp: Slot [47] registered Oct 31 00:48:57.743817 kernel: acpiphp: Slot [48] registered Oct 31 00:48:57.743822 kernel: acpiphp: Slot [49] registered Oct 31 00:48:57.743828 kernel: acpiphp: Slot [50] registered Oct 31 00:48:57.743834 kernel: acpiphp: Slot [51] registered Oct 31 00:48:57.743841 kernel: acpiphp: Slot [52] registered Oct 31 00:48:57.743847 kernel: acpiphp: Slot [53] registered Oct 31 00:48:57.743853 kernel: acpiphp: Slot [54] registered Oct 31 00:48:57.743858 kernel: acpiphp: Slot [55] registered Oct 31 00:48:57.743864 kernel: acpiphp: Slot [56] registered Oct 31 00:48:57.743870 kernel: acpiphp: Slot [57] registered Oct 31 00:48:57.743875 kernel: acpiphp: Slot [58] registered Oct 31 00:48:57.743881 kernel: acpiphp: Slot [59] registered Oct 31 00:48:57.743887 kernel: acpiphp: Slot [60] registered Oct 31 00:48:57.743892 kernel: acpiphp: Slot [61] registered Oct 31 00:48:57.743899 kernel: acpiphp: Slot [62] registered Oct 31 00:48:57.743905 kernel: acpiphp: Slot [63] registered Oct 31 00:48:57.743958 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] 
(subtractive decode) Oct 31 00:48:57.744011 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 00:48:57.744062 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 00:48:57.744131 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 00:48:57.744182 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 31 00:48:57.744233 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 31 00:48:57.744286 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 31 00:48:57.744363 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 31 00:48:57.744416 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 31 00:48:57.744474 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Oct 31 00:48:57.744528 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Oct 31 00:48:57.744580 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 31 00:48:57.744633 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 31 00:48:57.744689 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 31 00:48:57.744941 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 00:48:57.745004 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 00:48:57.745058 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 00:48:57.745111 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 00:48:57.745164 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 00:48:57.745217 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 00:48:57.745273 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 00:48:57.745351 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 00:48:57.745407 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 00:48:57.745459 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 00:48:57.747377 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 00:48:57.747439 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 00:48:57.747495 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 00:48:57.747548 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 00:48:57.747605 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 00:48:57.747660 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 00:48:57.747713 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 00:48:57.747785 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 00:48:57.747843 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 00:48:57.747896 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 00:48:57.747950 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 00:48:57.748003 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 00:48:57.748056 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 00:48:57.748108 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 00:48:57.748161 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 00:48:57.748214 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 00:48:57.748269 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 00:48:57.748360 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Oct 31 00:48:57.748421 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Oct 31 00:48:57.748475 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Oct 31 00:48:57.748532 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Oct 31 00:48:57.748586 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Oct 31 00:48:57.748639 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Oct 31 00:48:57.748695 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 31 00:48:57.748748 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 31 00:48:57.748802 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 31 00:48:57.748855 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 00:48:57.748907 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 00:48:57.748959 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 31 00:48:57.749013 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 00:48:57.749066 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 00:48:57.749120 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 00:48:57.749172 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 00:48:57.749225 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 00:48:57.749278 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 00:48:57.750455 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 00:48:57.750522 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 00:48:57.750581 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 00:48:57.750635 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 00:48:57.750691 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 00:48:57.750745 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 00:48:57.750797 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 00:48:57.750849 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 00:48:57.750903 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 00:48:57.750955 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 00:48:57.751007 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 00:48:57.751059 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 00:48:57.751114 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 00:48:57.751166 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 00:48:57.751219 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 00:48:57.751271 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 00:48:57.751331 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 00:48:57.751387 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 00:48:57.751440 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 00:48:57.751492 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 00:48:57.751547 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 00:48:57.751601 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 00:48:57.751654 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 00:48:57.751706 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 00:48:57.751759 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 00:48:57.751813 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 00:48:57.751865 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 00:48:57.751920 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 00:48:57.751972 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 00:48:57.752026 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 00:48:57.752078 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 00:48:57.752130 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 00:48:57.752183 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 00:48:57.752234 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 00:48:57.752286 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 00:48:57.753774 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 00:48:57.753839 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 00:48:57.753895 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 00:48:57.753951 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 00:48:57.754005 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 00:48:57.754059 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 00:48:57.754114 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 00:48:57.754166 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 00:48:57.754222 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 00:48:57.754277 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 00:48:57.754391 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 00:48:57.754447 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 00:48:57.754500 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 00:48:57.754554 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 00:48:57.754607 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 00:48:57.754659 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 00:48:57.754714 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 00:48:57.754768 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 00:48:57.754820 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 00:48:57.754872 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 00:48:57.754926 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 00:48:57.754978 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 00:48:57.755030 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 00:48:57.755083 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 
00:48:57.755138 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 00:48:57.755190 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 00:48:57.755245 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 00:48:57.755297 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 00:48:57.756393 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 00:48:57.756458 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 00:48:57.756514 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 00:48:57.756568 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 00:48:57.756626 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 00:48:57.756678 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 00:48:57.756730 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 00:48:57.756739 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 31 00:48:57.756746 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 31 00:48:57.756752 kernel: ACPI: PCI: Interrupt link LNKB disabled Oct 31 00:48:57.756758 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 00:48:57.756764 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 31 00:48:57.756772 kernel: iommu: Default domain type: Translated Oct 31 00:48:57.756778 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 00:48:57.756784 kernel: PCI: Using ACPI for IRQ routing Oct 31 00:48:57.756790 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 00:48:57.756797 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 31 00:48:57.756802 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 31 00:48:57.756854 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 31 00:48:57.756905 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Oct 31 00:48:57.756956 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 00:48:57.756967 kernel: vgaarb: loaded Oct 31 00:48:57.756974 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 31 00:48:57.756980 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 31 00:48:57.756986 kernel: clocksource: Switched to clocksource tsc-early Oct 31 00:48:57.756992 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 00:48:57.756998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 00:48:57.757004 kernel: pnp: PnP ACPI init Oct 31 00:48:57.757060 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 31 00:48:57.757112 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 31 00:48:57.757159 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 31 00:48:57.757212 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 31 00:48:57.757262 kernel: pnp 00:06: [dma 2] Oct 31 00:48:57.757312 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 31 00:48:57.757379 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 31 00:48:57.757426 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 31 00:48:57.757438 kernel: pnp: PnP ACPI: found 8 devices Oct 31 00:48:57.757444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 00:48:57.757450 kernel: NET: Registered PF_INET protocol family Oct 31 00:48:57.757457 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 00:48:57.757463 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 31 00:48:57.757469 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 00:48:57.757475 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 31 00:48:57.757481 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 31 00:48:57.757488 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 31 00:48:57.757494 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 00:48:57.757501 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 00:48:57.757507 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 00:48:57.757513 kernel: NET: Registered PF_XDP protocol family Oct 31 00:48:57.757566 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 31 00:48:57.757619 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 31 00:48:57.757673 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 31 00:48:57.757729 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 31 00:48:57.757783 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 31 00:48:57.757836 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 31 00:48:57.757889 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 31 00:48:57.757942 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 31 00:48:57.757994 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 31 00:48:57.758049 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 31 00:48:57.758102 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 31 00:48:57.758154 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 31 00:48:57.758206 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 31 
00:48:57.758258 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 31 00:48:57.758310 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 31 00:48:57.759702 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 31 00:48:57.759764 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 31 00:48:57.759821 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 31 00:48:57.759875 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 31 00:48:57.759929 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 31 00:48:57.759987 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 31 00:48:57.760040 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 31 00:48:57.760093 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 31 00:48:57.760146 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 00:48:57.760198 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 00:48:57.760251 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760304 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.760408 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760461 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.760514 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760566 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.760618 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760670 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 
31 00:48:57.760723 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760775 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.760831 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760884 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.760936 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.760989 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.761042 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.761095 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.761147 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.761200 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.761255 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.761308 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.761701 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.761756 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.761808 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.761860 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.761912 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.761964 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.762020 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.762072 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.762133 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.762290 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.762372 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.762426 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.763402 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.763458 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.763514 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.763567 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.763619 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.763672 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.763724 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.763776 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.763829 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.763896 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.763950 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.764001 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.764052 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.764105 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.764168 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.764240 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.764295 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.765374 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.765428 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.765481 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.765560 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Oct 31 00:48:57.765612 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.765665 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.765717 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.765770 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.765822 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.765874 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.765926 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.765979 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766034 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.766086 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766138 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.766190 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766242 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.766294 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766369 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.766435 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766486 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.766536 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766622 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.766748 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.766996 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.767052 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.768409 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.768473 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.768529 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.768584 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.768639 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.768696 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.768749 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.768802 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Oct 31 00:48:57.768855 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Oct 31 00:48:57.768909 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 31 00:48:57.768962 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 31 00:48:57.769016 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 31 00:48:57.769086 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 31 00:48:57.769140 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 00:48:57.769200 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Oct 31 00:48:57.769272 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 31 00:48:57.769366 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 31 00:48:57.769451 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 31 00:48:57.769546 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 00:48:57.769643 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 31 00:48:57.769716 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 31 00:48:57.769770 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 31 00:48:57.769823 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 
00:48:57.769881 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 31 00:48:57.769936 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 31 00:48:57.769988 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 31 00:48:57.770041 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 00:48:57.770094 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 31 00:48:57.770159 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 31 00:48:57.770214 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 00:48:57.770266 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 31 00:48:57.770318 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 31 00:48:57.772058 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 00:48:57.772386 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 31 00:48:57.772449 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 31 00:48:57.772507 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 00:48:57.772562 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 31 00:48:57.772652 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 31 00:48:57.772707 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 00:48:57.772759 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 31 00:48:57.772810 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 31 00:48:57.772861 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 00:48:57.772915 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Oct 31 00:48:57.772967 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 31 00:48:57.773018 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 31 00:48:57.773069 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Oct 31 00:48:57.773121 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 00:48:57.773186 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 31 00:48:57.773240 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 31 00:48:57.773292 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 31 00:48:57.773351 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 00:48:57.773417 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 31 00:48:57.773471 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 31 00:48:57.773545 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 31 00:48:57.773613 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 00:48:57.773664 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 31 00:48:57.773716 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 31 00:48:57.773770 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 00:48:57.773822 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 31 00:48:57.773873 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 31 00:48:57.773925 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 00:48:57.773977 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 31 00:48:57.774028 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 31 00:48:57.774080 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 00:48:57.774132 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 31 00:48:57.774184 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 31 00:48:57.774240 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 00:48:57.774292 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 31 00:48:57.774450 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 00:48:57.774505 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 00:48:57.774563 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 00:48:57.774615 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 00:48:57.774667 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 00:48:57.774719 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 00:48:57.774771 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 00:48:57.774823 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 00:48:57.774878 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 00:48:57.774930 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 00:48:57.774984 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 00:48:57.775036 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 00:48:57.775087 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 00:48:57.775139 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 00:48:57.775191 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 00:48:57.775243 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 00:48:57.775295 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 00:48:57.775357 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 00:48:57.775410 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 00:48:57.775462 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 00:48:57.775513 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 00:48:57.775566 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 00:48:57.775636 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 
00:48:57.775705 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 00:48:57.775757 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 00:48:57.775809 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 00:48:57.775860 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 00:48:57.775915 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 00:48:57.775967 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 00:48:57.776020 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 00:48:57.776072 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 00:48:57.776138 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 00:48:57.776189 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 00:48:57.776241 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 00:48:57.776291 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 00:48:57.776367 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 00:48:57.776423 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 00:48:57.776475 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 00:48:57.776526 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 00:48:57.776578 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 00:48:57.776630 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 00:48:57.776681 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 00:48:57.776731 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 00:48:57.776782 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 00:48:57.776833 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 00:48:57.776884 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Oct 31 00:48:57.776938 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 00:48:57.776989 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 00:48:57.777041 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 00:48:57.777092 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 00:48:57.777143 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 00:48:57.777194 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 00:48:57.777245 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 00:48:57.777296 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 00:48:57.777355 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 00:48:57.777410 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 00:48:57.777456 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 00:48:57.777501 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 00:48:57.777587 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 31 00:48:57.777648 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 31 00:48:57.777697 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 31 00:48:57.777745 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 31 00:48:57.777791 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 00:48:57.777841 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 00:48:57.777888 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 00:48:57.777935 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 00:48:57.777981 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 31 00:48:57.778027 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 31 00:48:57.778078 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 31 00:48:57.778126 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 31 00:48:57.778175 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 00:48:57.778225 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 31 00:48:57.778272 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 31 00:48:57.778319 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 00:48:57.778378 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 31 00:48:57.778426 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 31 00:48:57.778472 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 00:48:57.778526 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 31 00:48:57.778573 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 00:48:57.778626 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 31 00:48:57.778708 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 00:48:57.778759 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 31 00:48:57.778815 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 00:48:57.778870 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 31 00:48:57.778917 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 00:48:57.778970 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 31 00:48:57.779018 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 00:48:57.779080 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 31 00:48:57.779130 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 31 00:48:57.779176 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 00:48:57.779227 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Oct 31 00:48:57.779275 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 31 00:48:57.779322 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 00:48:57.779608 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 31 00:48:57.779658 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 31 00:48:57.779710 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 00:48:57.779762 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 31 00:48:57.779809 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 00:48:57.779861 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 31 00:48:57.779908 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 00:48:57.779958 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 31 00:48:57.780209 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 00:48:57.780267 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 31 00:48:57.780316 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 00:48:57.780441 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 31 00:48:57.780522 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 00:48:57.780589 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 31 00:48:57.780636 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 31 00:48:57.780686 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 00:48:57.780736 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 31 00:48:57.780784 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 31 00:48:57.780830 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 00:48:57.780880 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Oct 31 00:48:57.780927 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 31 00:48:57.780977 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 00:48:57.781030 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 31 00:48:57.781078 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 00:48:57.781128 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 31 00:48:57.781175 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 00:48:57.781227 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 31 00:48:57.781274 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 00:48:57.781337 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 31 00:48:57.781408 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 00:48:57.781459 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 31 00:48:57.781507 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 00:48:57.781602 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 31 00:48:57.781654 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 31 00:48:57.781701 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 00:48:57.781752 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 31 00:48:57.781800 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 31 00:48:57.781847 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 00:48:57.781898 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 31 00:48:57.781946 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 00:48:57.782017 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 31 00:48:57.782064 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Oct 31 00:48:57.782115 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 31 00:48:57.782162 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 00:48:57.782215 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 31 00:48:57.782262 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 00:48:57.782314 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 31 00:48:57.782392 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 00:48:57.782442 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 31 00:48:57.782490 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 00:48:57.782549 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 31 00:48:57.782558 kernel: PCI: CLS 32 bytes, default 64 Oct 31 00:48:57.782565 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 00:48:57.782574 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 00:48:57.782580 kernel: clocksource: Switched to clocksource tsc Oct 31 00:48:57.782586 kernel: Initialise system trusted keyrings Oct 31 00:48:57.782592 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 00:48:57.782599 kernel: Key type asymmetric registered Oct 31 00:48:57.782605 kernel: Asymmetric key parser 'x509' registered Oct 31 00:48:57.782611 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 00:48:57.782617 kernel: io scheduler mq-deadline registered Oct 31 00:48:57.782623 kernel: io scheduler kyber registered Oct 31 00:48:57.782631 kernel: io scheduler bfq registered Oct 31 00:48:57.782684 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 31 00:48:57.782737 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.782790 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 31 00:48:57.782842 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.782895 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 31 00:48:57.782946 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.782999 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 31 00:48:57.783054 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783106 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 31 00:48:57.783158 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783211 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 31 00:48:57.783263 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783317 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 31 00:48:57.783388 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783441 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 31 00:48:57.783493 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783545 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 31 00:48:57.783597 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783653 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 31 00:48:57.783709 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783761 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 31 00:48:57.783813 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783865 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 31 00:48:57.783917 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783973 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 31 00:48:57.784024 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784076 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 31 00:48:57.784128 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784181 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 31 00:48:57.784233 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784288 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 31 00:48:57.784384 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784438 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 31 00:48:57.784491 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784543 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 31 00:48:57.784628 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784684 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 31 00:48:57.784735 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784785 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 31 00:48:57.784837 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784887 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 31 00:48:57.784938 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784993 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 31 00:48:57.785044 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785113 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 31 00:48:57.785166 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785217 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 31 00:48:57.785271 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785322 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 31 00:48:57.785384 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785436 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 31 00:48:57.785487 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785558 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 31 00:48:57.785625 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785680 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 31 00:48:57.785731 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785783 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 31 00:48:57.785834 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785885 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 31 00:48:57.785940 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785991 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 31 00:48:57.786042 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.786093 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 31 00:48:57.786145 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.786156 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Oct 31 00:48:57.786163 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 00:48:57.786169 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 00:48:57.786175 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 31 00:48:57.786181 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 00:48:57.786188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 00:48:57.786239 kernel: rtc_cmos 00:01: registered as rtc0 Oct 31 00:48:57.786289 kernel: rtc_cmos 00:01: setting system clock to 2025-10-31T00:48:57 UTC (1761871737) Oct 31 00:48:57.786390 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 31 00:48:57.786400 kernel: intel_pstate: CPU model not supported Oct 31 00:48:57.786407 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 00:48:57.786413 kernel: NET: Registered PF_INET6 protocol family Oct 31 00:48:57.786419 kernel: Segment Routing with IPv6 Oct 31 00:48:57.786425 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 00:48:57.786432 kernel: NET: Registered PF_PACKET protocol family Oct 31 00:48:57.786438 kernel: Key type dns_resolver registered Oct 31 00:48:57.786444 kernel: IPI shorthand broadcast: enabled Oct 31 00:48:57.786453 kernel: sched_clock: Marking stable (906002942, 220089139)->(1182622323, -56530242) Oct 31 00:48:57.786459 kernel: registered taskstats version 1 Oct 31 00:48:57.786465 kernel: Loading compiled-in X.509 certificates Oct 31 00:48:57.786472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 00:48:57.786478 kernel: Key type .fscrypt registered Oct 31 00:48:57.786484 kernel: Key type fscrypt-provisioning registered Oct 31 00:48:57.786490 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 00:48:57.786496 kernel: ima: Allocated hash algorithm: sha1 Oct 31 00:48:57.786503 kernel: ima: No architecture policies found Oct 31 00:48:57.786510 kernel: clk: Disabling unused clocks Oct 31 00:48:57.786521 kernel: Freeing unused kernel image (initmem) memory: 42880K Oct 31 00:48:57.786528 kernel: Write protecting the kernel read-only data: 36864k Oct 31 00:48:57.786534 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Oct 31 00:48:57.786540 kernel: Run /init as init process Oct 31 00:48:57.786546 kernel: with arguments: Oct 31 00:48:57.786553 kernel: /init Oct 31 00:48:57.786558 kernel: with environment: Oct 31 00:48:57.786564 kernel: HOME=/ Oct 31 00:48:57.786572 kernel: TERM=linux Oct 31 00:48:57.786580 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 00:48:57.786588 systemd[1]: Detected virtualization vmware. Oct 31 00:48:57.786594 systemd[1]: Detected architecture x86-64. Oct 31 00:48:57.786600 systemd[1]: Running in initrd. Oct 31 00:48:57.786606 systemd[1]: No hostname configured, using default hostname. Oct 31 00:48:57.786612 systemd[1]: Hostname set to . Oct 31 00:48:57.786620 systemd[1]: Initializing machine ID from random generator. Oct 31 00:48:57.786626 systemd[1]: Queued start job for default target initrd.target. Oct 31 00:48:57.786632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 00:48:57.786639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 00:48:57.786645 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 31 00:48:57.786652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 00:48:57.786658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 00:48:57.786665 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 00:48:57.786673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 31 00:48:57.786680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 31 00:48:57.786686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 00:48:57.786692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:48:57.786699 systemd[1]: Reached target paths.target - Path Units. Oct 31 00:48:57.786705 systemd[1]: Reached target slices.target - Slice Units. Oct 31 00:48:57.786711 systemd[1]: Reached target swap.target - Swaps. Oct 31 00:48:57.786719 systemd[1]: Reached target timers.target - Timer Units. Oct 31 00:48:57.786725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 00:48:57.786732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 00:48:57.786738 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 00:48:57.786744 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 00:48:57.786750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 00:48:57.786756 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 00:48:57.786763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 00:48:57.786769 systemd[1]: Reached target sockets.target - Socket Units. 
Oct 31 00:48:57.786776 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 00:48:57.786782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 00:48:57.786789 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 00:48:57.786795 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 00:48:57.786802 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 00:48:57.786808 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 00:48:57.786815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:48:57.786834 systemd-journald[217]: Collecting audit messages is disabled. Oct 31 00:48:57.786870 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 00:48:57.786877 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 00:48:57.786883 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 00:48:57.786891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 00:48:57.786898 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 00:48:57.786905 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 00:48:57.786912 kernel: Bridge firewalling registered Oct 31 00:48:57.786919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 00:48:57.786926 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 00:48:57.786948 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 31 00:48:57.786955 systemd-journald[217]: Journal started Oct 31 00:48:57.786969 systemd-journald[217]: Runtime Journal (/run/log/journal/9078b903f6f74db685ae227fb1581f4f) is 4.8M, max 38.6M, 33.8M free. Oct 31 00:48:57.744941 systemd-modules-load[218]: Inserted module 'overlay' Oct 31 00:48:57.789845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:48:57.775340 systemd-modules-load[218]: Inserted module 'br_netfilter' Oct 31 00:48:57.793260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:48:57.793277 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 00:48:57.793669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 00:48:57.796736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 00:48:57.800857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:48:57.803473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:48:57.809720 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 00:48:57.809971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 00:48:57.810998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 31 00:48:57.819020 dracut-cmdline[249]: dracut-dracut-053 Oct 31 00:48:57.820850 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:48:57.832441 systemd-resolved[251]: Positive Trust Anchors: Oct 31 00:48:57.832448 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 00:48:57.832470 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 00:48:57.834945 systemd-resolved[251]: Defaulting to hostname 'linux'. Oct 31 00:48:57.835630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 00:48:57.835786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 00:48:57.863338 kernel: SCSI subsystem initialized Oct 31 00:48:57.871338 kernel: Loading iSCSI transport class v2.0-870. 
Oct 31 00:48:57.878335 kernel: iscsi: registered transport (tcp) Oct 31 00:48:57.892378 kernel: iscsi: registered transport (qla4xxx) Oct 31 00:48:57.892407 kernel: QLogic iSCSI HBA Driver Oct 31 00:48:57.912768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 00:48:57.918438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 00:48:57.934334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 00:48:57.934360 kernel: device-mapper: uevent: version 1.0.3 Oct 31 00:48:57.934369 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 31 00:48:57.965338 kernel: raid6: avx2x4 gen() 53349 MB/s Oct 31 00:48:57.982366 kernel: raid6: avx2x2 gen() 52916 MB/s Oct 31 00:48:57.999482 kernel: raid6: avx2x1 gen() 46064 MB/s Oct 31 00:48:57.999503 kernel: raid6: using algorithm avx2x4 gen() 53349 MB/s Oct 31 00:48:58.017536 kernel: raid6: .... xor() 21397 MB/s, rmw enabled Oct 31 00:48:58.017588 kernel: raid6: using avx2x2 recovery algorithm Oct 31 00:48:58.031337 kernel: xor: automatically using best checksumming function avx Oct 31 00:48:58.134420 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 00:48:58.139398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 00:48:58.145591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 00:48:58.152578 systemd-udevd[434]: Using default interface naming scheme 'v255'. Oct 31 00:48:58.155078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 00:48:58.160402 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 00:48:58.167055 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Oct 31 00:48:58.181669 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 31 00:48:58.185415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 00:48:58.255671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 00:48:58.259420 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 00:48:58.269499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 00:48:58.269987 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 00:48:58.270699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 00:48:58.270926 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 00:48:58.274402 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 00:48:58.281014 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 00:48:58.319339 kernel: VMware PVSCSI driver - version 1.0.7.0-k Oct 31 00:48:58.320667 kernel: vmw_pvscsi: using 64bit dma Oct 31 00:48:58.320687 kernel: vmw_pvscsi: max_id: 16 Oct 31 00:48:58.320695 kernel: vmw_pvscsi: setting ring_pages to 8 Oct 31 00:48:58.325340 kernel: vmw_pvscsi: enabling reqCallThreshold Oct 31 00:48:58.325360 kernel: vmw_pvscsi: driver-based request coalescing enabled Oct 31 00:48:58.325369 kernel: vmw_pvscsi: using MSI-X Oct 31 00:48:58.329438 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Oct 31 00:48:58.341984 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Oct 31 00:48:58.342003 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Oct 31 00:48:58.350671 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Oct 31 00:48:58.350694 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Oct 31 00:48:58.350785 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Oct 31 00:48:58.358340 kernel: libata version 3.00 loaded. 
Oct 31 00:48:58.360338 kernel: ata_piix 0000:00:07.1: version 2.13 Oct 31 00:48:58.360459 kernel: scsi host1: ata_piix Oct 31 00:48:58.362164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 00:48:58.362236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:48:58.362565 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:48:58.362672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 00:48:58.362742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:48:58.362909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:48:58.366066 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 00:48:58.366087 kernel: scsi host2: ata_piix Oct 31 00:48:58.366170 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Oct 31 00:48:58.366182 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Oct 31 00:48:58.367473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:48:58.370342 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Oct 31 00:48:58.380289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:48:58.383402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:48:58.393770 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:48:58.537422 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Oct 31 00:48:58.540340 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Oct 31 00:48:58.551596 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 31 00:48:58.551620 kernel: AES CTR mode by8 optimization enabled Oct 31 00:48:58.556726 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Oct 31 00:48:58.556860 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 31 00:48:58.556939 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Oct 31 00:48:58.557009 kernel: sd 0:0:0:0: [sda] Cache data unavailable Oct 31 00:48:58.558295 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Oct 31 00:48:58.560783 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Oct 31 00:48:58.560874 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 00:48:58.562740 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 00:48:58.562756 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 31 00:48:58.568336 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 00:48:58.592337 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481) Oct 31 00:48:58.595126 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Oct 31 00:48:58.597665 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Oct 31 00:48:58.600256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 00:48:58.603459 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (487) Oct 31 00:48:58.606179 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Oct 31 00:48:58.606310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Oct 31 00:48:58.609449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Oct 31 00:48:58.633353 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 00:48:58.637342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 00:48:58.642541 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 00:48:59.641362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 31 00:48:59.641400 disk-uuid[588]: The operation has completed successfully. Oct 31 00:48:59.675958 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 00:48:59.676011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 00:48:59.680400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 31 00:48:59.682115 sh[608]: Success Oct 31 00:48:59.690357 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 31 00:48:59.737539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 31 00:48:59.738049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 31 00:48:59.738928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 31 00:48:59.755720 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 Oct 31 00:48:59.755744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:48:59.755753 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 31 00:48:59.756813 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 00:48:59.758335 kernel: BTRFS info (device dm-0): using free space tree Oct 31 00:48:59.764339 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 31 00:48:59.766502 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 31 00:48:59.771412 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Oct 31 00:48:59.772420 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 31 00:48:59.796468 kernel: BTRFS info (device sda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:48:59.796497 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:48:59.797373 kernel: BTRFS info (device sda6): using free space tree Oct 31 00:48:59.805338 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 00:48:59.814425 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 00:48:59.816337 kernel: BTRFS info (device sda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:48:59.818194 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 00:48:59.823421 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 31 00:48:59.847279 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 31 00:48:59.851420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 31 00:48:59.903948 ignition[666]: Ignition 2.19.0 Oct 31 00:48:59.904224 ignition[666]: Stage: fetch-offline Oct 31 00:48:59.904246 ignition[666]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:48:59.904252 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 00:48:59.904305 ignition[666]: parsed url from cmdline: "" Oct 31 00:48:59.904307 ignition[666]: no config URL provided Oct 31 00:48:59.904310 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 00:48:59.904314 ignition[666]: no config at "/usr/lib/ignition/user.ign" Oct 31 00:48:59.904705 ignition[666]: config successfully fetched Oct 31 00:48:59.904724 ignition[666]: parsing config with SHA512: bba6c530faf942ee9b42d60c3a3efc26c7e260a410f3743137e5fa655b09c8f3d5ef29c8349e0aec55646014d4719f5ca77fe5877c177f19805d462b6a46cf2f Oct 31 00:48:59.907850 unknown[666]: fetched base config from "system" Oct 31 00:48:59.907991 unknown[666]: fetched user config from "vmware" Oct 31 00:48:59.908375 ignition[666]: fetch-offline: fetch-offline passed Oct 31 00:48:59.908546 ignition[666]: Ignition finished successfully Oct 31 00:48:59.909236 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 00:48:59.928907 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 00:48:59.932415 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 00:48:59.945036 systemd-networkd[800]: lo: Link UP Oct 31 00:48:59.945223 systemd-networkd[800]: lo: Gained carrier Oct 31 00:48:59.946145 systemd-networkd[800]: Enumeration completed Oct 31 00:48:59.949544 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 00:48:59.949658 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 00:48:59.946298 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 31 00:48:59.946487 systemd[1]: Reached target network.target - Network. Oct 31 00:48:59.946507 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Oct 31 00:48:59.946582 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 00:48:59.950420 systemd-networkd[800]: ens192: Link UP Oct 31 00:48:59.950423 systemd-networkd[800]: ens192: Gained carrier Oct 31 00:48:59.952470 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 00:48:59.959592 ignition[802]: Ignition 2.19.0 Oct 31 00:48:59.959601 ignition[802]: Stage: kargs Oct 31 00:48:59.959764 ignition[802]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:48:59.959771 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 00:48:59.961321 ignition[802]: kargs: kargs passed Oct 31 00:48:59.961478 ignition[802]: Ignition finished successfully Oct 31 00:48:59.963072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 00:48:59.968541 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 00:48:59.976626 ignition[809]: Ignition 2.19.0 Oct 31 00:48:59.976632 ignition[809]: Stage: disks Oct 31 00:48:59.977288 ignition[809]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:48:59.977420 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 00:48:59.978799 ignition[809]: disks: disks passed Oct 31 00:48:59.978831 ignition[809]: Ignition finished successfully Oct 31 00:48:59.979794 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 00:48:59.980015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 00:48:59.980143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 00:48:59.980344 systemd[1]: Reached target local-fs.target - Local File Systems. 
Oct 31 00:48:59.980521 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 00:48:59.980693 systemd[1]: Reached target basic.target - Basic System. Oct 31 00:48:59.988548 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 00:48:59.999132 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 31 00:49:00.000652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 00:49:00.004496 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 00:49:00.061605 kernel: EXT4-fs (sda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none. Oct 31 00:49:00.061091 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 00:49:00.061460 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 00:49:00.066411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 00:49:00.068372 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 00:49:00.068772 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 00:49:00.068988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 00:49:00.069003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 00:49:00.072322 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 00:49:00.073621 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 31 00:49:00.075346 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (825) Oct 31 00:49:00.078459 kernel: BTRFS info (device sda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:49:00.078479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:49:00.078488 kernel: BTRFS info (device sda6): using free space tree Oct 31 00:49:00.082437 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 31 00:49:00.082348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 00:49:00.102316 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 00:49:00.104860 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Oct 31 00:49:00.107133 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 00:49:00.109082 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 00:49:00.157871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 00:49:00.165464 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 00:49:00.167873 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 00:49:00.170414 kernel: BTRFS info (device sda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:49:00.183583 ignition[937]: INFO : Ignition 2.19.0 Oct 31 00:49:00.183583 ignition[937]: INFO : Stage: mount Oct 31 00:49:00.183583 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:49:00.183583 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 31 00:49:00.184705 ignition[937]: INFO : mount: mount passed Oct 31 00:49:00.184705 ignition[937]: INFO : Ignition finished successfully Oct 31 00:49:00.184587 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 00:49:00.190403 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 31 00:49:00.190596 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 31 00:49:00.754340 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 31 00:49:00.759490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:49:00.841358 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (950)
Oct 31 00:49:00.846291 kernel: BTRFS info (device sda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:49:00.846321 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:49:00.846354 kernel: BTRFS info (device sda6): using free space tree
Oct 31 00:49:00.850344 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 31 00:49:00.851867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:49:00.866557 ignition[967]: INFO : Ignition 2.19.0
Oct 31 00:49:00.866829 ignition[967]: INFO : Stage: files
Oct 31 00:49:00.867815 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:49:00.867815 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:49:00.867815 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 00:49:00.868505 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 00:49:00.868505 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 00:49:00.870553 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 00:49:00.870798 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 00:49:00.871188 unknown[967]: wrote ssh authorized keys file for user: core
Oct 31 00:49:00.871400 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 00:49:00.873075 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:49:00.873318 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 31 00:49:00.911434 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 00:49:00.957637 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 31 00:49:01.417356 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 00:49:01.665184 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:01.665465 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 31 00:49:01.665465 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 31 00:49:01.665465 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:49:01.702171 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:49:01.704927 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:49:01.705087 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:49:01.705087 ignition[967]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 00:49:01.705087 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 00:49:01.705507 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:49:01.705507 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:49:01.705507 ignition[967]: INFO : files: files passed
Oct 31 00:49:01.705507 ignition[967]: INFO : Ignition finished successfully
Oct 31 00:49:01.706200 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 00:49:01.710450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 00:49:01.712329 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 00:49:01.712766 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 00:49:01.712954 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 00:49:01.719387 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:49:01.719387 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:49:01.720572 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:49:01.721061 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:49:01.721673 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 00:49:01.724392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 00:49:01.746469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 00:49:01.746680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 00:49:01.746987 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 00:49:01.747098 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 00:49:01.747221 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 00:49:01.748412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 00:49:01.756679 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:49:01.760416 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 00:49:01.765451 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:49:01.765601 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:49:01.765753 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 00:49:01.765879 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 00:49:01.765947 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:49:01.766161 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 00:49:01.766291 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 00:49:01.766445 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 00:49:01.766646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:49:01.766852 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 00:49:01.767035 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 00:49:01.767216 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:49:01.767441 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 00:49:01.767639 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 00:49:01.767794 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 00:49:01.768105 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:49:01.768171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:49:01.768472 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:49:01.768623 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:49:01.768808 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 00:49:01.768847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:49:01.769026 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:49:01.769082 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:49:01.769308 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:49:01.769405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:49:01.769641 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 00:49:01.769760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:49:01.773347 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:49:01.773507 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 00:49:01.773758 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 00:49:01.773908 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:49:01.773954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:49:01.774095 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:49:01.774142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:49:01.774292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:49:01.774368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:49:01.774604 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:49:01.774665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 00:49:01.782550 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 00:49:01.784450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 00:49:01.784597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:49:01.784690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:49:01.784945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:49:01.785024 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:49:01.786596 systemd-networkd[800]: ens192: Gained IPv6LL
Oct 31 00:49:01.789745 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:49:01.789831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 00:49:01.795503 ignition[1022]: INFO : Ignition 2.19.0
Oct 31 00:49:01.795503 ignition[1022]: INFO : Stage: umount
Oct 31 00:49:01.795863 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:49:01.795863 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:49:01.796963 ignition[1022]: INFO : umount: umount passed
Oct 31 00:49:01.797097 ignition[1022]: INFO : Ignition finished successfully
Oct 31 00:49:01.798910 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:49:01.799181 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:49:01.799243 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 00:49:01.799425 systemd[1]: Stopped target network.target - Network.
Oct 31 00:49:01.799510 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:49:01.799548 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 00:49:01.799650 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:49:01.799672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 00:49:01.799771 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:49:01.799791 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 00:49:01.799889 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 00:49:01.799910 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 00:49:01.800075 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 00:49:01.800209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 00:49:01.804076 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:49:01.804138 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 00:49:01.804434 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:49:01.804458 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:49:01.808451 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 00:49:01.808568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:49:01.808596 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:49:01.808706 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 31 00:49:01.808728 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 31 00:49:01.810053 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:49:01.814186 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:49:01.814264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 00:49:01.815156 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:49:01.815192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:49:01.815495 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:49:01.815518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:49:01.815662 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 00:49:01.815684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:49:01.818530 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:49:01.818595 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 00:49:01.822547 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:49:01.822622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:49:01.823041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:49:01.823077 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:49:01.823188 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:49:01.823207 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:49:01.823368 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:49:01.823392 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:49:01.823666 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:49:01.823689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:49:01.823974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:49:01.823997 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:49:01.830400 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 00:49:01.830510 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:49:01.830540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:49:01.830667 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 00:49:01.830690 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:49:01.830812 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:49:01.830834 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:49:01.830951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:49:01.830972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:49:01.833356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:49:01.833597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 00:49:01.868155 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:49:01.868464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 00:49:01.869033 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 00:49:01.869177 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:49:01.869219 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 00:49:01.873495 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 00:49:01.885618 systemd[1]: Switching root.
Oct 31 00:49:01.918026 systemd-journald[217]: Journal stopped
Oct 31 00:48:57.734984 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025
Oct 31 00:48:57.735000 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:48:57.735006 kernel: Disabled fast string operations
Oct 31 00:48:57.735011 kernel: BIOS-provided physical RAM map:
Oct 31 00:48:57.735014 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 31 00:48:57.735019 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 31 00:48:57.735025 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 31 00:48:57.735029 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 31 00:48:57.735033 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 31 00:48:57.735037 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 31 00:48:57.735042 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 31 00:48:57.735046 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 31 00:48:57.735050 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 31 00:48:57.735054 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 31 00:48:57.735061 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 31 00:48:57.735066 kernel: NX (Execute Disable) protection: active
Oct 31 00:48:57.735070 kernel: APIC: Static calls initialized
Oct 31 00:48:57.735075 kernel: SMBIOS 2.7 present.
Oct 31 00:48:57.735080 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 31 00:48:57.735085 kernel: vmware: hypercall mode: 0x00
Oct 31 00:48:57.735090 kernel: Hypervisor detected: VMware
Oct 31 00:48:57.735095 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 31 00:48:57.735101 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 31 00:48:57.735105 kernel: vmware: using clock offset of 2596498976 ns
Oct 31 00:48:57.735110 kernel: tsc: Detected 3408.000 MHz processor
Oct 31 00:48:57.735115 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 00:48:57.735120 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 00:48:57.735125 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 31 00:48:57.735130 kernel: total RAM covered: 3072M
Oct 31 00:48:57.735135 kernel: Found optimal setting for mtrr clean up
Oct 31 00:48:57.735141 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 31 00:48:57.735147 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Oct 31 00:48:57.735152 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 00:48:57.735157 kernel: Using GB pages for direct mapping
Oct 31 00:48:57.735162 kernel: ACPI: Early table checksum verification disabled
Oct 31 00:48:57.735167 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 31 00:48:57.735172 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 31 00:48:57.735176 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 31 00:48:57.735181 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 31 00:48:57.735186 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 31 00:48:57.735194 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 31 00:48:57.735199 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 31 00:48:57.735204 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 31 00:48:57.735209 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 31 00:48:57.735215 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 31 00:48:57.735221 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 31 00:48:57.735226 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 31 00:48:57.735231 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 31 00:48:57.735236 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 31 00:48:57.735242 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 31 00:48:57.735247 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 31 00:48:57.735252 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 31 00:48:57.735257 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 31 00:48:57.735262 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 31 00:48:57.735267 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 31 00:48:57.735274 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 31 00:48:57.735279 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 31 00:48:57.735284 kernel: system APIC only can use physical flat
Oct 31 00:48:57.735289 kernel: APIC: Switched APIC routing to: physical flat
Oct 31 00:48:57.735294 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 31 00:48:57.735299 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 31 00:48:57.735304 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 31 00:48:57.735309 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 31 00:48:57.735314 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 31 00:48:57.735320 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 31 00:48:57.735332 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 31 00:48:57.735337 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 31 00:48:57.735342 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Oct 31 00:48:57.735347 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Oct 31 00:48:57.735352 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Oct 31 00:48:57.735357 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Oct 31 00:48:57.735362 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Oct 31 00:48:57.735367 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Oct 31 00:48:57.735372 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Oct 31 00:48:57.735377 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Oct 31 00:48:57.735384 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Oct 31 00:48:57.735389 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Oct 31 00:48:57.735394 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Oct 31 00:48:57.735399 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Oct 31 00:48:57.735404 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Oct 31 00:48:57.735409 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Oct 31 00:48:57.735414 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Oct 31 00:48:57.735419 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Oct 31 00:48:57.735423 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Oct 31 00:48:57.735429 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Oct 31 00:48:57.735435 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Oct 31 00:48:57.735440 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Oct 31 00:48:57.735445 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Oct 31 00:48:57.735450 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Oct 31 00:48:57.735455 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Oct 31 00:48:57.735460 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Oct 31 00:48:57.735465 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Oct 31 00:48:57.735470 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Oct 31 00:48:57.735475 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Oct 31 00:48:57.735480 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Oct 31 00:48:57.735486 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Oct 31 00:48:57.735491 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Oct 31 00:48:57.735496 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Oct 31 00:48:57.735501 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Oct 31 00:48:57.735506 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Oct 31 00:48:57.735511 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Oct 31 00:48:57.735516 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Oct 31 00:48:57.735521 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Oct 31 00:48:57.735526 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Oct 31 00:48:57.735531 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Oct 31 00:48:57.735538 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Oct 31 00:48:57.735543 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Oct 31 00:48:57.735548 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Oct 31 00:48:57.735553 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Oct 31 00:48:57.735558 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Oct 31 00:48:57.735563 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Oct 31 00:48:57.735568 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Oct 31 00:48:57.735573 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Oct 31 00:48:57.735578 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Oct 31 00:48:57.735583 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Oct 31 00:48:57.735589 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Oct 31 00:48:57.735594 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Oct 31 00:48:57.735599 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Oct 31 00:48:57.735608 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Oct 31 00:48:57.735614 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Oct 31 00:48:57.735619 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Oct 31 00:48:57.735625 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Oct 31 00:48:57.735630 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Oct 31 00:48:57.735636 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Oct 31 00:48:57.735642 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Oct 31 00:48:57.735647 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Oct 31 00:48:57.735652 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Oct 31 00:48:57.735658 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Oct 31 00:48:57.735663 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Oct 31 00:48:57.735669 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Oct 31 00:48:57.735674 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Oct 31 00:48:57.735679 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Oct 31 00:48:57.735685 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Oct 31 00:48:57.735690 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Oct 31 00:48:57.735696 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Oct 31 00:48:57.735702 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Oct 31 00:48:57.735707 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Oct 31 00:48:57.735713 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Oct 31 00:48:57.735718 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Oct 31 00:48:57.735723 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Oct 31 00:48:57.735729 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Oct 31 00:48:57.735734 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Oct 31 00:48:57.735739 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Oct 31 00:48:57.735744 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Oct 31 00:48:57.735751 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Oct 31 00:48:57.735756 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Oct 31 00:48:57.735762 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Oct 31 00:48:57.735767 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Oct 31 00:48:57.735772 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Oct 31 00:48:57.735778 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Oct 31 00:48:57.735783 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Oct 31 00:48:57.735788 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Oct 31 00:48:57.735794 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Oct 31 00:48:57.735799 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Oct 31 00:48:57.735805 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Oct 31 00:48:57.735811 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Oct 31 00:48:57.735816 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Oct 31 00:48:57.735821 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Oct 31 00:48:57.735827 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Oct 31 00:48:57.735832 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Oct 31 00:48:57.735838 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Oct 31 00:48:57.735843 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Oct 31 00:48:57.735848 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Oct 31 00:48:57.735853 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Oct 31 00:48:57.735860 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Oct 31 00:48:57.735865 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Oct 31 00:48:57.735871 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Oct 31 00:48:57.735876 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Oct 31 00:48:57.735881 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Oct 31 00:48:57.735887 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Oct 31 00:48:57.735892 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Oct 31 00:48:57.735898 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Oct 31 00:48:57.735903 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Oct 31 00:48:57.735908 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Oct 31 00:48:57.735913 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Oct 31 00:48:57.735920 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Oct 31 00:48:57.735925 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Oct 31 00:48:57.735931 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Oct 31 00:48:57.735937 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Oct 31 00:48:57.735942 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Oct 31 00:48:57.735947 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Oct 31 00:48:57.735953 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Oct 31 00:48:57.735958 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Oct 31 00:48:57.735963 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Oct 31 00:48:57.735969 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Oct 31 00:48:57.735975 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Oct 31 00:48:57.735981 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Oct 31 00:48:57.735986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 31 00:48:57.735991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 31 00:48:57.735997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 31 00:48:57.736003 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Oct 31 00:48:57.736008 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Oct 31 00:48:57.736014 kernel: Zone ranges:
Oct 31 00:48:57.736020 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 00:48:57.736026 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Oct 31 00:48:57.736032 kernel: Normal empty
Oct 31 00:48:57.736037 kernel: Movable zone start for each node
Oct 31 00:48:57.736042 kernel: Early memory node ranges
Oct 31 00:48:57.736048 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 31 00:48:57.736053 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 31 00:48:57.736059 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 31 00:48:57.736064 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 31 00:48:57.736070 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 00:48:57.736076 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 31 00:48:57.736082 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 31 00:48:57.736088 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 31 00:48:57.736093 kernel: system APIC only can use physical flat
Oct 31 00:48:57.736098 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 31 00:48:57.736104 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 31
00:48:57.736110 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Oct 31 00:48:57.736115 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Oct 31 00:48:57.736120 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Oct 31 00:48:57.736126 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Oct 31 00:48:57.736132 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Oct 31 00:48:57.736138 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Oct 31 00:48:57.736143 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Oct 31 00:48:57.736149 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Oct 31 00:48:57.736154 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Oct 31 00:48:57.736160 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Oct 31 00:48:57.736165 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Oct 31 00:48:57.736171 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Oct 31 00:48:57.736176 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Oct 31 00:48:57.736181 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Oct 31 00:48:57.736188 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Oct 31 00:48:57.736193 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Oct 31 00:48:57.736199 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Oct 31 00:48:57.736204 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Oct 31 00:48:57.736210 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Oct 31 00:48:57.736215 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Oct 31 00:48:57.736221 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Oct 31 00:48:57.736226 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Oct 31 00:48:57.736232 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Oct 31 00:48:57.736237 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Oct 31 
00:48:57.736244 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Oct 31 00:48:57.736249 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Oct 31 00:48:57.736254 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Oct 31 00:48:57.736260 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Oct 31 00:48:57.736265 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Oct 31 00:48:57.736271 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Oct 31 00:48:57.736276 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Oct 31 00:48:57.736282 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Oct 31 00:48:57.736287 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Oct 31 00:48:57.736293 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Oct 31 00:48:57.736299 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Oct 31 00:48:57.736304 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Oct 31 00:48:57.736310 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Oct 31 00:48:57.736315 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Oct 31 00:48:57.736320 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Oct 31 00:48:57.736334 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Oct 31 00:48:57.736341 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Oct 31 00:48:57.736346 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Oct 31 00:48:57.736351 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Oct 31 00:48:57.736359 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Oct 31 00:48:57.736365 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Oct 31 00:48:57.736370 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Oct 31 00:48:57.736375 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Oct 31 00:48:57.736381 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Oct 31 
00:48:57.736386 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Oct 31 00:48:57.736392 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Oct 31 00:48:57.736397 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Oct 31 00:48:57.736402 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Oct 31 00:48:57.736408 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Oct 31 00:48:57.736414 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Oct 31 00:48:57.736420 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Oct 31 00:48:57.736425 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Oct 31 00:48:57.736431 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Oct 31 00:48:57.736436 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Oct 31 00:48:57.736442 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Oct 31 00:48:57.736447 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Oct 31 00:48:57.736453 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Oct 31 00:48:57.736458 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Oct 31 00:48:57.736464 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Oct 31 00:48:57.736470 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Oct 31 00:48:57.736475 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Oct 31 00:48:57.736481 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Oct 31 00:48:57.736486 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Oct 31 00:48:57.736492 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Oct 31 00:48:57.736497 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Oct 31 00:48:57.736502 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Oct 31 00:48:57.736508 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Oct 31 00:48:57.736517 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Oct 31 
00:48:57.736523 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Oct 31 00:48:57.736529 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Oct 31 00:48:57.736534 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Oct 31 00:48:57.736540 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Oct 31 00:48:57.736545 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Oct 31 00:48:57.736551 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Oct 31 00:48:57.736556 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Oct 31 00:48:57.736561 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Oct 31 00:48:57.736567 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Oct 31 00:48:57.736572 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Oct 31 00:48:57.736579 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Oct 31 00:48:57.736584 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Oct 31 00:48:57.736590 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Oct 31 00:48:57.736595 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Oct 31 00:48:57.736600 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Oct 31 00:48:57.736606 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Oct 31 00:48:57.736611 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Oct 31 00:48:57.736617 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Oct 31 00:48:57.736622 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Oct 31 00:48:57.736627 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Oct 31 00:48:57.736634 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Oct 31 00:48:57.736639 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Oct 31 00:48:57.736644 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Oct 31 00:48:57.736650 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Oct 31 
00:48:57.736655 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Oct 31 00:48:57.736661 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Oct 31 00:48:57.736666 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Oct 31 00:48:57.736671 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Oct 31 00:48:57.736677 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Oct 31 00:48:57.736683 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Oct 31 00:48:57.736689 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Oct 31 00:48:57.736694 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Oct 31 00:48:57.736699 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Oct 31 00:48:57.736704 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Oct 31 00:48:57.736710 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Oct 31 00:48:57.736715 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Oct 31 00:48:57.736721 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Oct 31 00:48:57.736726 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Oct 31 00:48:57.736732 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Oct 31 00:48:57.736738 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Oct 31 00:48:57.736744 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Oct 31 00:48:57.736749 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Oct 31 00:48:57.736754 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Oct 31 00:48:57.736760 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Oct 31 00:48:57.736765 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Oct 31 00:48:57.736771 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Oct 31 00:48:57.736776 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Oct 31 00:48:57.736781 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Oct 31 
00:48:57.736788 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Oct 31 00:48:57.736793 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Oct 31 00:48:57.736799 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Oct 31 00:48:57.736804 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Oct 31 00:48:57.736809 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Oct 31 00:48:57.736815 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Oct 31 00:48:57.736820 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Oct 31 00:48:57.736826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Oct 31 00:48:57.736832 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 00:48:57.736837 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Oct 31 00:48:57.736844 kernel: TSC deadline timer available Oct 31 00:48:57.736849 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Oct 31 00:48:57.736855 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Oct 31 00:48:57.736860 kernel: Booting paravirtualized kernel on VMware hypervisor Oct 31 00:48:57.736866 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 00:48:57.736871 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Oct 31 00:48:57.736877 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Oct 31 00:48:57.736882 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Oct 31 00:48:57.736888 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Oct 31 00:48:57.736894 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Oct 31 00:48:57.736900 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Oct 31 00:48:57.736905 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Oct 31 00:48:57.736910 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Oct 31 00:48:57.736923 kernel: pcpu-alloc: 
[0] 040 041 042 043 044 045 046 047 Oct 31 00:48:57.736929 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Oct 31 00:48:57.736935 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Oct 31 00:48:57.736941 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Oct 31 00:48:57.736946 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Oct 31 00:48:57.736953 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Oct 31 00:48:57.736959 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Oct 31 00:48:57.736965 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Oct 31 00:48:57.736970 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Oct 31 00:48:57.736976 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Oct 31 00:48:57.736982 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Oct 31 00:48:57.736988 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:48:57.736994 kernel: random: crng init done Oct 31 00:48:57.737001 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Oct 31 00:48:57.737007 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Oct 31 00:48:57.737012 kernel: printk: log_buf_len min size: 262144 bytes Oct 31 00:48:57.737018 kernel: printk: log_buf_len: 1048576 bytes Oct 31 00:48:57.737024 kernel: printk: early log buf free: 239760(91%) Oct 31 00:48:57.737030 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 00:48:57.737036 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 31 00:48:57.737041 kernel: Fallback order for Node 0: 0 Oct 31 
00:48:57.737047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Oct 31 00:48:57.737054 kernel: Policy zone: DMA32 Oct 31 00:48:57.737060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 00:48:57.737066 kernel: Memory: 1936368K/2096628K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 160000K reserved, 0K cma-reserved) Oct 31 00:48:57.737073 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Oct 31 00:48:57.737079 kernel: ftrace: allocating 37980 entries in 149 pages Oct 31 00:48:57.737086 kernel: ftrace: allocated 149 pages with 4 groups Oct 31 00:48:57.737091 kernel: Dynamic Preempt: voluntary Oct 31 00:48:57.737097 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 00:48:57.737103 kernel: rcu: RCU event tracing is enabled. Oct 31 00:48:57.737109 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Oct 31 00:48:57.737115 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 00:48:57.737121 kernel: Rude variant of Tasks RCU enabled. Oct 31 00:48:57.737127 kernel: Tracing variant of Tasks RCU enabled. Oct 31 00:48:57.737133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 00:48:57.737138 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Oct 31 00:48:57.737145 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Oct 31 00:48:57.737151 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Oct 31 00:48:57.737157 kernel: Console: colour VGA+ 80x25 Oct 31 00:48:57.737163 kernel: printk: console [tty0] enabled Oct 31 00:48:57.737169 kernel: printk: console [ttyS0] enabled Oct 31 00:48:57.737175 kernel: ACPI: Core revision 20230628 Oct 31 00:48:57.737181 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Oct 31 00:48:57.737187 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 00:48:57.737193 kernel: x2apic enabled Oct 31 00:48:57.737200 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 00:48:57.737206 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 00:48:57.737212 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 00:48:57.737218 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Oct 31 00:48:57.737224 kernel: Disabled fast string operations Oct 31 00:48:57.737231 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 31 00:48:57.737237 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Oct 31 00:48:57.737243 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 00:48:57.737249 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Oct 31 00:48:57.737256 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Oct 31 00:48:57.737262 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 31 00:48:57.737268 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 31 00:48:57.737273 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 31 00:48:57.737279 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 00:48:57.737285 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 00:48:57.737291 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 
31 00:48:57.737297 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 31 00:48:57.737303 kernel: GDS: Unknown: Dependent on hypervisor status Oct 31 00:48:57.737310 kernel: active return thunk: its_return_thunk Oct 31 00:48:57.737316 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 31 00:48:57.737322 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 00:48:57.737365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 00:48:57.737371 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 00:48:57.737377 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 00:48:57.737383 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 31 00:48:57.737389 kernel: Freeing SMP alternatives memory: 32K Oct 31 00:48:57.737395 kernel: pid_max: default: 131072 minimum: 1024 Oct 31 00:48:57.737403 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 31 00:48:57.737409 kernel: landlock: Up and running. Oct 31 00:48:57.737414 kernel: SELinux: Initializing. Oct 31 00:48:57.737420 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 00:48:57.737426 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 00:48:57.737432 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Oct 31 00:48:57.737438 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 00:48:57.737444 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 00:48:57.737451 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Oct 31 00:48:57.737457 kernel: Performance Events: Skylake events, core PMU driver. 
Oct 31 00:48:57.737463 kernel: core: CPUID marked event: 'cpu cycles' unavailable Oct 31 00:48:57.737469 kernel: core: CPUID marked event: 'instructions' unavailable Oct 31 00:48:57.737474 kernel: core: CPUID marked event: 'bus cycles' unavailable Oct 31 00:48:57.737480 kernel: core: CPUID marked event: 'cache references' unavailable Oct 31 00:48:57.737486 kernel: core: CPUID marked event: 'cache misses' unavailable Oct 31 00:48:57.737491 kernel: core: CPUID marked event: 'branch instructions' unavailable Oct 31 00:48:57.737497 kernel: core: CPUID marked event: 'branch misses' unavailable Oct 31 00:48:57.737504 kernel: ... version: 1 Oct 31 00:48:57.737510 kernel: ... bit width: 48 Oct 31 00:48:57.737515 kernel: ... generic registers: 4 Oct 31 00:48:57.737521 kernel: ... value mask: 0000ffffffffffff Oct 31 00:48:57.737527 kernel: ... max period: 000000007fffffff Oct 31 00:48:57.737533 kernel: ... fixed-purpose events: 0 Oct 31 00:48:57.737539 kernel: ... event mask: 000000000000000f Oct 31 00:48:57.737544 kernel: signal: max sigframe size: 1776 Oct 31 00:48:57.737550 kernel: rcu: Hierarchical SRCU implementation. Oct 31 00:48:57.737557 kernel: rcu: Max phase no-delay instances is 400. Oct 31 00:48:57.737563 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 31 00:48:57.737569 kernel: smp: Bringing up secondary CPUs ... Oct 31 00:48:57.737575 kernel: smpboot: x86: Booting SMP configuration: Oct 31 00:48:57.737581 kernel: .... 
node #0, CPUs: #1 Oct 31 00:48:57.737587 kernel: Disabled fast string operations Oct 31 00:48:57.737592 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Oct 31 00:48:57.737598 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 31 00:48:57.737604 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 00:48:57.737610 kernel: smpboot: Max logical packages: 128 Oct 31 00:48:57.737617 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Oct 31 00:48:57.737624 kernel: devtmpfs: initialized Oct 31 00:48:57.737630 kernel: x86/mm: Memory block size: 128MB Oct 31 00:48:57.737636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Oct 31 00:48:57.737642 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 00:48:57.737648 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Oct 31 00:48:57.737653 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 00:48:57.737659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 00:48:57.737665 kernel: audit: initializing netlink subsys (disabled) Oct 31 00:48:57.737672 kernel: audit: type=2000 audit(1761871735.089:1): state=initialized audit_enabled=0 res=1 Oct 31 00:48:57.737678 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 00:48:57.737684 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 00:48:57.737690 kernel: cpuidle: using governor menu Oct 31 00:48:57.737695 kernel: Simple Boot Flag at 0x36 set to 0x80 Oct 31 00:48:57.737701 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 00:48:57.737707 kernel: dca service started, version 1.12.1 Oct 31 00:48:57.737713 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Oct 31 00:48:57.737719 kernel: PCI: Using configuration type 1 for base access Oct 31 00:48:57.737726 kernel: kprobes: kprobe jump-optimization is 
enabled. All kprobes are optimized if possible. Oct 31 00:48:57.737732 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 00:48:57.737738 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 00:48:57.737743 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 00:48:57.737749 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 00:48:57.737755 kernel: ACPI: Added _OSI(Module Device) Oct 31 00:48:57.737761 kernel: ACPI: Added _OSI(Processor Device) Oct 31 00:48:57.737767 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 00:48:57.737773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 00:48:57.737780 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Oct 31 00:48:57.737785 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 31 00:48:57.737791 kernel: ACPI: Interpreter enabled Oct 31 00:48:57.737797 kernel: ACPI: PM: (supports S0 S1 S5) Oct 31 00:48:57.737803 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 00:48:57.737809 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 00:48:57.737815 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 00:48:57.737820 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Oct 31 00:48:57.737826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Oct 31 00:48:57.737907 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 00:48:57.737965 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Oct 31 00:48:57.738017 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Oct 31 00:48:57.738026 kernel: PCI host bridge to bus 0000:00 Oct 31 00:48:57.738077 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 00:48:57.738124 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] 
Oct 31 00:48:57.738173 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 00:48:57.738219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 00:48:57.738265 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Oct 31 00:48:57.738311 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Oct 31 00:48:57.738384 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Oct 31 00:48:57.738441 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Oct 31 00:48:57.738499 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Oct 31 00:48:57.738560 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Oct 31 00:48:57.738613 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Oct 31 00:48:57.738666 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 31 00:48:57.738718 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 31 00:48:57.738770 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 31 00:48:57.738822 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 31 00:48:57.738884 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Oct 31 00:48:57.738936 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Oct 31 00:48:57.738989 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Oct 31 00:48:57.739045 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Oct 31 00:48:57.739098 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Oct 31 00:48:57.739150 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Oct 31 00:48:57.739208 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Oct 31 00:48:57.739260 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Oct 31 00:48:57.739312 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Oct 31 00:48:57.739373 kernel: pci 0000:00:0f.0: reg 0x18: [mem 
0xfe000000-0xfe7fffff]
Oct 31 00:48:57.739425 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Oct 31 00:48:57.739476 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 00:48:57.739538 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Oct 31 00:48:57.739599 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.739652 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.739710 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.739762 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.739818 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.739871 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.739927 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.739982 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740038 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740091 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740151 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740204 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740260 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740316 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740386 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740439 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740496 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740549 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740608 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740661 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740717 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740770 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740825 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740878 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.740935 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.740988 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741045 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741098 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741155 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741207 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741263 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741319 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741390 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741443 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741500 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741556 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741612 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741682 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741744 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741797 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741853 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.741906 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.741965 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742021 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742104 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742199 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742258 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742312 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742410 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742467 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742528 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742597 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742653 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742705 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742780 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742850 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.742906 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.742958 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.743013 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.743065 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.743121 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.743172 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.743230 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Oct 31 00:48:57.743282 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.743504 kernel: pci_bus 0000:01: extended config space not accessible
Oct 31 00:48:57.743563 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 31 00:48:57.743708 kernel: pci_bus 0000:02: extended config space not accessible
Oct 31 00:48:57.743719 kernel: acpiphp: Slot [32] registered
Oct 31 00:48:57.743727 kernel: acpiphp: Slot [33] registered
Oct 31 00:48:57.743733 kernel: acpiphp: Slot [34] registered
Oct 31 00:48:57.743739 kernel: acpiphp: Slot [35] registered
Oct 31 00:48:57.743745 kernel: acpiphp: Slot [36] registered
Oct 31 00:48:57.743751 kernel: acpiphp: Slot [37] registered
Oct 31 00:48:57.743757 kernel: acpiphp: Slot [38] registered
Oct 31 00:48:57.743763 kernel: acpiphp: Slot [39] registered
Oct 31 00:48:57.743769 kernel: acpiphp: Slot [40] registered
Oct 31 00:48:57.743775 kernel: acpiphp: Slot [41] registered
Oct 31 00:48:57.743781 kernel: acpiphp: Slot [42] registered
Oct 31 00:48:57.743788 kernel: acpiphp: Slot [43] registered
Oct 31 00:48:57.743794 kernel: acpiphp: Slot [44] registered
Oct 31 00:48:57.743799 kernel: acpiphp: Slot [45] registered
Oct 31 00:48:57.743805 kernel: acpiphp: Slot [46] registered
Oct 31 00:48:57.743811 kernel: acpiphp: Slot [47] registered
Oct 31 00:48:57.743817 kernel: acpiphp: Slot [48] registered
Oct 31 00:48:57.743822 kernel: acpiphp: Slot [49] registered
Oct 31 00:48:57.743828 kernel: acpiphp: Slot [50] registered
Oct 31 00:48:57.743834 kernel: acpiphp: Slot [51] registered
Oct 31 00:48:57.743841 kernel: acpiphp: Slot [52] registered
Oct 31 00:48:57.743847 kernel: acpiphp: Slot [53] registered
Oct 31 00:48:57.743853 kernel: acpiphp: Slot [54] registered
Oct 31 00:48:57.743858 kernel: acpiphp: Slot [55] registered
Oct 31 00:48:57.743864 kernel: acpiphp: Slot [56] registered
Oct 31 00:48:57.743870 kernel: acpiphp: Slot [57] registered
Oct 31 00:48:57.743875 kernel: acpiphp: Slot [58] registered
Oct 31 00:48:57.743881 kernel: acpiphp: Slot [59] registered
Oct 31 00:48:57.743887 kernel: acpiphp: Slot [60] registered
Oct 31 00:48:57.743892 kernel: acpiphp: Slot [61] registered
Oct 31 00:48:57.743899 kernel: acpiphp: Slot [62] registered
Oct 31 00:48:57.743905 kernel: acpiphp: Slot [63] registered
Oct 31 00:48:57.743958 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Oct 31 00:48:57.744011 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 31 00:48:57.744062 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 31 00:48:57.744131 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 31 00:48:57.744182 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Oct 31 00:48:57.744233 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Oct 31 00:48:57.744286 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Oct 31 00:48:57.744363 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Oct 31 00:48:57.744416 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Oct 31 00:48:57.744474 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Oct 31 00:48:57.744528 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Oct 31 00:48:57.744580 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Oct 31 00:48:57.744633 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Oct 31 00:48:57.744689 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Oct 31 00:48:57.744941 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Oct 31 00:48:57.745004 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 31 00:48:57.745058 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 31 00:48:57.745111 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 31 00:48:57.745164 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 31 00:48:57.745217 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 31 00:48:57.745273 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 31 00:48:57.745351 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 31 00:48:57.745407 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 31 00:48:57.745459 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 31 00:48:57.747377 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Oct 31 00:48:57.747439 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Oct 31 00:48:57.747495 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Oct 31 00:48:57.747548 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Oct 31 00:48:57.747605 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Oct 31 00:48:57.747660 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Oct 31 00:48:57.747713 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Oct 31 00:48:57.747785 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Oct 31 00:48:57.747843 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Oct 31 00:48:57.747896 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Oct 31 00:48:57.747950 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Oct 31 00:48:57.748003 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Oct 31 00:48:57.748056 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Oct 31 00:48:57.748108 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Oct 31 00:48:57.748161 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Oct 31 00:48:57.748214 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Oct 31 00:48:57.748269 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Oct 31 00:48:57.748360 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Oct 31 00:48:57.748421 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Oct 31 00:48:57.748475 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Oct 31 00:48:57.748532 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Oct 31 00:48:57.748586 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Oct 31 00:48:57.748639 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Oct 31 00:48:57.748695 kernel: pci 0000:0b:00.0: supports D1 D2
Oct 31 00:48:57.748748 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 31 00:48:57.748802 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Oct 31 00:48:57.748855 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Oct 31 00:48:57.748907 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Oct 31 00:48:57.748959 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Oct 31 00:48:57.749013 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Oct 31 00:48:57.749066 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Oct 31 00:48:57.749120 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Oct 31 00:48:57.749172 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Oct 31 00:48:57.749225 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Oct 31 00:48:57.749278 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Oct 31 00:48:57.750455 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Oct 31 00:48:57.750522 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Oct 31 00:48:57.750581 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Oct 31 00:48:57.750635 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Oct 31 00:48:57.750691 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Oct 31 00:48:57.750745 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Oct 31 00:48:57.750797 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Oct 31 00:48:57.750849 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Oct 31 00:48:57.750903 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Oct 31 00:48:57.750955 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Oct 31 00:48:57.751007 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Oct 31 00:48:57.751059 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Oct 31 00:48:57.751114 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Oct 31 00:48:57.751166 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Oct 31 00:48:57.751219 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Oct 31 00:48:57.751271 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Oct 31 00:48:57.751331 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Oct 31 00:48:57.751387 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Oct 31 00:48:57.751440 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Oct 31 00:48:57.751492 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Oct 31 00:48:57.751547 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Oct 31 00:48:57.751601 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Oct 31 00:48:57.751654 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Oct 31 00:48:57.751706 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Oct 31 00:48:57.751759 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Oct 31 00:48:57.751813 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Oct 31 00:48:57.751865 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Oct 31 00:48:57.751920 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Oct 31 00:48:57.751972 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Oct 31 00:48:57.752026 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Oct 31 00:48:57.752078 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Oct 31 00:48:57.752130 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Oct 31 00:48:57.752183 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Oct 31 00:48:57.752234 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Oct 31 00:48:57.752286 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Oct 31 00:48:57.753774 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Oct 31 00:48:57.753839 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Oct 31 00:48:57.753895 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Oct 31 00:48:57.753951 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Oct 31 00:48:57.754005 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Oct 31 00:48:57.754059 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Oct 31 00:48:57.754114 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Oct 31 00:48:57.754166 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Oct 31 00:48:57.754222 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Oct 31 00:48:57.754277 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Oct 31 00:48:57.754391 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Oct 31 00:48:57.754447 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Oct 31 00:48:57.754500 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Oct 31 00:48:57.754554 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Oct 31 00:48:57.754607 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Oct 31 00:48:57.754659 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Oct 31 00:48:57.754714 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Oct 31 00:48:57.754768 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Oct 31 00:48:57.754820 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Oct 31 00:48:57.754872 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Oct 31 00:48:57.754926 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Oct 31 00:48:57.754978 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Oct 31 00:48:57.755030 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Oct 31 00:48:57.755083 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Oct 31 00:48:57.755138 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Oct 31 00:48:57.755190 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Oct 31 00:48:57.755245 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Oct 31 00:48:57.755297 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Oct 31 00:48:57.756393 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Oct 31 00:48:57.756458 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Oct 31 00:48:57.756514 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Oct 31 00:48:57.756568 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Oct 31 00:48:57.756626 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Oct 31 00:48:57.756678 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Oct 31 00:48:57.756730 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Oct 31 00:48:57.756739 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Oct 31 00:48:57.756746 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Oct 31 00:48:57.756752 kernel: ACPI: PCI: Interrupt link LNKB disabled
Oct 31 00:48:57.756758 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 00:48:57.756764 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Oct 31 00:48:57.756772 kernel: iommu: Default domain type: Translated
Oct 31 00:48:57.756778 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 00:48:57.756784 kernel: PCI: Using ACPI for IRQ routing
Oct 31 00:48:57.756790 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 00:48:57.756797 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Oct 31 00:48:57.756802 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Oct 31 00:48:57.756854 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Oct 31 00:48:57.756905 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Oct 31 00:48:57.756956 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 00:48:57.756967 kernel: vgaarb: loaded
Oct 31 00:48:57.756974 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Oct 31 00:48:57.756980 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Oct 31 00:48:57.756986 kernel: clocksource: Switched to clocksource tsc-early
Oct 31 00:48:57.756992 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 00:48:57.756998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 00:48:57.757004 kernel: pnp: PnP ACPI init
Oct 31 00:48:57.757060 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Oct 31 00:48:57.757112 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Oct 31 00:48:57.757159 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Oct 31 00:48:57.757212 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Oct 31 00:48:57.757262 kernel: pnp 00:06: [dma 2]
Oct 31 00:48:57.757312 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Oct 31 00:48:57.757379 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Oct 31 00:48:57.757426 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Oct 31 00:48:57.757438 kernel: pnp: PnP ACPI: found 8 devices
Oct 31 00:48:57.757444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 00:48:57.757450 kernel: NET: Registered PF_INET protocol family
Oct 31 00:48:57.757457 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 00:48:57.757463 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 31 00:48:57.757469 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 00:48:57.757475 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 31 00:48:57.757481 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 31 00:48:57.757488 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 31 00:48:57.757494 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 31 00:48:57.757501 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 31 00:48:57.757507 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 00:48:57.757513 kernel: NET: Registered PF_XDP protocol family
Oct 31 00:48:57.757566 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Oct 31 00:48:57.757619 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 31 00:48:57.757673 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 31 00:48:57.757729 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 31 00:48:57.757783 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 31 00:48:57.757836 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Oct 31 00:48:57.757889 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Oct 31 00:48:57.757942 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Oct 31 00:48:57.757994 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Oct 31 00:48:57.758049 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Oct 31 00:48:57.758102 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Oct 31 00:48:57.758154 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Oct 31 00:48:57.758206 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Oct 31 00:48:57.758258 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Oct 31 00:48:57.758310 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Oct 31 00:48:57.759702 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Oct 31 00:48:57.759764 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Oct 31 00:48:57.759821 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Oct 31 00:48:57.759875 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Oct 31 00:48:57.759929 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Oct 31 00:48:57.759987 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Oct 31 00:48:57.760040 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Oct 31 00:48:57.760093 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Oct 31 00:48:57.760146 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Oct 31 00:48:57.760198 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Oct 31 00:48:57.760251 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760304 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.760408 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760461 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.760514 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760566 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.760618 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760670 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.760723 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760775 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.760831 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760884 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.760936 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.760989 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.761042 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.761095 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.761147 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.761200 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.761255 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.761308 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.761701 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.761756 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.761808 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.761860 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.761912 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.761964 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.762020 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.762072 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.762133 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.762290 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.762372 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.762426 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.763402 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.763458 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.763514 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.763567 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.763619 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.763672 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.763724 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.763776 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.763829 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.763896 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.763950 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.764001 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.764052 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.764105 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.764168 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.764240 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.764295 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.765374 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.765428 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.765481 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.765560 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.765612 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.765665 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.765717 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.765770 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.765822 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.765874 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.765926 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.765979 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766034 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.766086 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766138 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.766190 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766242 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.766294 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766369 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.766435 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766486 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.766536 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766622 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.766748 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.766996 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.767052 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.768409 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.768473 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.768529 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.768584 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.768639 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.768696 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.768749 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.768802 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000]
Oct 31 00:48:57.768855 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000]
Oct 31 00:48:57.768909 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 31 00:48:57.768962 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Oct 31 00:48:57.769016 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 31 00:48:57.769086 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 31 00:48:57.769140 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 31 00:48:57.769200 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Oct 31 00:48:57.769272 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 31 00:48:57.769366 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 31 00:48:57.769451 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 31 00:48:57.769546 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Oct 31 00:48:57.769643 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 31 00:48:57.769716 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 31 00:48:57.769770 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 31 00:48:57.769823 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 31 00:48:57.769881 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 31 00:48:57.769936 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 31 00:48:57.769988 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Oct 31 00:48:57.770041 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Oct 31 00:48:57.770094 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Oct 31 00:48:57.770159 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Oct 31 00:48:57.770214 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Oct 31 00:48:57.770266 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Oct 31 00:48:57.770318 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Oct 31 00:48:57.772058 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Oct 31 00:48:57.772386 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Oct 31 00:48:57.772449 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Oct 31 00:48:57.772507 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Oct 31 00:48:57.772562 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Oct 31 00:48:57.772652 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Oct 31 00:48:57.772707 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Oct 31 00:48:57.772759 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Oct 31 00:48:57.772810 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Oct 31 00:48:57.772861 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Oct 31 00:48:57.772915 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Oct 31 00:48:57.772967 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Oct 31 00:48:57.773018 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Oct 31 00:48:57.773069 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Oct 31 00:48:57.773121 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Oct 31 00:48:57.773186 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Oct 31 00:48:57.773240 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Oct 31 00:48:57.773292 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Oct 31 00:48:57.773351 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Oct 31 00:48:57.773417 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Oct 31 00:48:57.773471 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Oct 31 00:48:57.773545 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Oct 31 00:48:57.773613 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Oct 31 00:48:57.773664 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Oct 31 00:48:57.773716 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Oct 31 00:48:57.773770 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Oct 31 00:48:57.773822 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Oct 31 00:48:57.773873 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Oct 31 00:48:57.773925 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Oct 31 00:48:57.773977 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Oct 31 00:48:57.774028 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Oct 31 00:48:57.774080 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Oct 31 00:48:57.774132 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Oct 31 00:48:57.774184 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Oct 31 00:48:57.774240 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Oct 31 00:48:57.774292 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Oct 31 00:48:57.774450 kernel: pci 0000:00:16.7:
bridge window [mem 0xfb800000-0xfb8fffff] Oct 31 00:48:57.774505 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 00:48:57.774563 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 31 00:48:57.774615 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 31 00:48:57.774667 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 31 00:48:57.774719 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 00:48:57.774771 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 31 00:48:57.774823 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 31 00:48:57.774878 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 31 00:48:57.774930 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 00:48:57.774984 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 31 00:48:57.775036 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 31 00:48:57.775087 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 31 00:48:57.775139 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 00:48:57.775191 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 31 00:48:57.775243 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 31 00:48:57.775295 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 00:48:57.775357 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 31 00:48:57.775410 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 31 00:48:57.775462 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 00:48:57.775513 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 31 00:48:57.775566 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 31 00:48:57.775636 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 
00:48:57.775705 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 31 00:48:57.775757 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 31 00:48:57.775809 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 00:48:57.775860 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 31 00:48:57.775915 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 31 00:48:57.775967 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 00:48:57.776020 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 31 00:48:57.776072 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 31 00:48:57.776138 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 31 00:48:57.776189 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 00:48:57.776241 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 31 00:48:57.776291 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 31 00:48:57.776367 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 31 00:48:57.776423 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 00:48:57.776475 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 31 00:48:57.776526 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 31 00:48:57.776578 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 00:48:57.776630 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 31 00:48:57.776681 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 31 00:48:57.776731 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 31 00:48:57.776782 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 31 00:48:57.776833 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 31 00:48:57.776884 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Oct 31 00:48:57.776938 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 31 00:48:57.776989 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 31 00:48:57.777041 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 00:48:57.777092 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 31 00:48:57.777143 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 31 00:48:57.777194 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 00:48:57.777245 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 31 00:48:57.777296 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 31 00:48:57.777355 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 00:48:57.777410 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 00:48:57.777456 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 00:48:57.777501 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 00:48:57.777587 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 31 00:48:57.777648 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 31 00:48:57.777697 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 31 00:48:57.777745 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 31 00:48:57.777791 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 31 00:48:57.777841 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 31 00:48:57.777888 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 31 00:48:57.777935 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 31 00:48:57.777981 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 31 00:48:57.778027 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 31 00:48:57.778078 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 31 00:48:57.778126 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 31 00:48:57.778175 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Oct 31 00:48:57.778225 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 31 00:48:57.778272 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 31 00:48:57.778319 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 31 00:48:57.778378 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 31 00:48:57.778426 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 31 00:48:57.778472 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 31 00:48:57.778526 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 31 00:48:57.778573 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 31 00:48:57.778626 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 31 00:48:57.778708 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 31 00:48:57.778759 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 31 00:48:57.778815 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 31 00:48:57.778870 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 31 00:48:57.778917 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 31 00:48:57.778970 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 31 00:48:57.779018 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 31 00:48:57.779080 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 31 00:48:57.779130 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 31 00:48:57.779176 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 31 00:48:57.779227 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Oct 31 00:48:57.779275 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 31 00:48:57.779322 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Oct 31 00:48:57.779608 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 31 00:48:57.779658 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 31 00:48:57.779710 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 31 00:48:57.779762 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 31 00:48:57.779809 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 31 00:48:57.779861 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 31 00:48:57.779908 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 31 00:48:57.779958 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 31 00:48:57.780209 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 31 00:48:57.780267 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 31 00:48:57.780316 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 31 00:48:57.780441 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 31 00:48:57.780522 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 31 00:48:57.780589 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 31 00:48:57.780636 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 31 00:48:57.780686 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 31 00:48:57.780736 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 31 00:48:57.780784 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 31 00:48:57.780830 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 31 00:48:57.780880 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Oct 31 00:48:57.780927 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 31 00:48:57.780977 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 31 00:48:57.781030 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 31 00:48:57.781078 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 31 00:48:57.781128 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 31 00:48:57.781175 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 31 00:48:57.781227 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 31 00:48:57.781274 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 31 00:48:57.781337 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 31 00:48:57.781408 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 31 00:48:57.781459 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 31 00:48:57.781507 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 31 00:48:57.781602 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 31 00:48:57.781654 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 31 00:48:57.781701 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 31 00:48:57.781752 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 31 00:48:57.781800 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 31 00:48:57.781847 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 31 00:48:57.781898 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 31 00:48:57.781946 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 31 00:48:57.782017 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 31 00:48:57.782064 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Oct 31 00:48:57.782115 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 31 00:48:57.782162 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Oct 31 00:48:57.782215 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 31 00:48:57.782262 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 31 00:48:57.782314 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 31 00:48:57.782392 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 31 00:48:57.782442 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 31 00:48:57.782490 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 31 00:48:57.782549 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 31 00:48:57.782558 kernel: PCI: CLS 32 bytes, default 64 Oct 31 00:48:57.782565 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 00:48:57.782574 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 31 00:48:57.782580 kernel: clocksource: Switched to clocksource tsc Oct 31 00:48:57.782586 kernel: Initialise system trusted keyrings Oct 31 00:48:57.782592 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 00:48:57.782599 kernel: Key type asymmetric registered Oct 31 00:48:57.782605 kernel: Asymmetric key parser 'x509' registered Oct 31 00:48:57.782611 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 00:48:57.782617 kernel: io scheduler mq-deadline registered Oct 31 00:48:57.782623 kernel: io scheduler kyber registered Oct 31 00:48:57.782631 kernel: io scheduler bfq registered Oct 31 00:48:57.782684 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 31 00:48:57.782737 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.782790 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 31 00:48:57.782842 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.782895 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 31 00:48:57.782946 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.782999 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 31 00:48:57.783054 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783106 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 31 00:48:57.783158 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783211 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 31 00:48:57.783263 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783317 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 31 00:48:57.783388 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783441 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 31 00:48:57.783493 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783545 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 31 00:48:57.783597 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783653 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 31 00:48:57.783709 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783761 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 31 00:48:57.783813 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783865 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 31 00:48:57.783917 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.783973 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 31 00:48:57.784024 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784076 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 31 00:48:57.784128 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784181 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 31 00:48:57.784233 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784288 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 31 00:48:57.784384 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784438 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 31 00:48:57.784491 kernel: pcieport 0000:00:17.0: 
pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784543 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 31 00:48:57.784628 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784684 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Oct 31 00:48:57.784735 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784785 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Oct 31 00:48:57.784837 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784887 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Oct 31 00:48:57.784938 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.784993 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Oct 31 00:48:57.785044 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785113 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Oct 31 00:48:57.785166 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785217 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Oct 31 00:48:57.785271 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785322 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Oct 31 00:48:57.785384 
kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785436 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Oct 31 00:48:57.785487 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785558 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Oct 31 00:48:57.785625 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785680 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Oct 31 00:48:57.785731 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785783 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Oct 31 00:48:57.785834 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785885 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Oct 31 00:48:57.785940 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.785991 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Oct 31 00:48:57.786042 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.786093 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Oct 31 00:48:57.786145 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 31 00:48:57.786156 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Oct 31 00:48:57.786163 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 00:48:57.786169 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 00:48:57.786175 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Oct 31 00:48:57.786181 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 00:48:57.786188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 00:48:57.786239 kernel: rtc_cmos 00:01: registered as rtc0 Oct 31 00:48:57.786289 kernel: rtc_cmos 00:01: setting system clock to 2025-10-31T00:48:57 UTC (1761871737) Oct 31 00:48:57.786390 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Oct 31 00:48:57.786400 kernel: intel_pstate: CPU model not supported Oct 31 00:48:57.786407 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 00:48:57.786413 kernel: NET: Registered PF_INET6 protocol family Oct 31 00:48:57.786419 kernel: Segment Routing with IPv6 Oct 31 00:48:57.786425 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 00:48:57.786432 kernel: NET: Registered PF_PACKET protocol family Oct 31 00:48:57.786438 kernel: Key type dns_resolver registered Oct 31 00:48:57.786444 kernel: IPI shorthand broadcast: enabled Oct 31 00:48:57.786453 kernel: sched_clock: Marking stable (906002942, 220089139)->(1182622323, -56530242) Oct 31 00:48:57.786459 kernel: registered taskstats version 1 Oct 31 00:48:57.786465 kernel: Loading compiled-in X.509 certificates Oct 31 00:48:57.786472 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 00:48:57.786478 kernel: Key type .fscrypt registered Oct 31 00:48:57.786484 kernel: Key type fscrypt-provisioning registered Oct 31 00:48:57.786490 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 00:48:57.786496 kernel: ima: Allocated hash algorithm: sha1 Oct 31 00:48:57.786503 kernel: ima: No architecture policies found Oct 31 00:48:57.786510 kernel: clk: Disabling unused clocks Oct 31 00:48:57.786521 kernel: Freeing unused kernel image (initmem) memory: 42880K Oct 31 00:48:57.786528 kernel: Write protecting the kernel read-only data: 36864k Oct 31 00:48:57.786534 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Oct 31 00:48:57.786540 kernel: Run /init as init process Oct 31 00:48:57.786546 kernel: with arguments: Oct 31 00:48:57.786553 kernel: /init Oct 31 00:48:57.786558 kernel: with environment: Oct 31 00:48:57.786564 kernel: HOME=/ Oct 31 00:48:57.786572 kernel: TERM=linux Oct 31 00:48:57.786580 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 00:48:57.786588 systemd[1]: Detected virtualization vmware. Oct 31 00:48:57.786594 systemd[1]: Detected architecture x86-64. Oct 31 00:48:57.786600 systemd[1]: Running in initrd. Oct 31 00:48:57.786606 systemd[1]: No hostname configured, using default hostname. Oct 31 00:48:57.786612 systemd[1]: Hostname set to . Oct 31 00:48:57.786620 systemd[1]: Initializing machine ID from random generator. Oct 31 00:48:57.786626 systemd[1]: Queued start job for default target initrd.target. Oct 31 00:48:57.786632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 00:48:57.786639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 00:48:57.786645 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 31 00:48:57.786652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 00:48:57.786658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 00:48:57.786665 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 00:48:57.786673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 31 00:48:57.786680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 31 00:48:57.786686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 00:48:57.786692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:48:57.786699 systemd[1]: Reached target paths.target - Path Units. Oct 31 00:48:57.786705 systemd[1]: Reached target slices.target - Slice Units. Oct 31 00:48:57.786711 systemd[1]: Reached target swap.target - Swaps. Oct 31 00:48:57.786719 systemd[1]: Reached target timers.target - Timer Units. Oct 31 00:48:57.786725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 00:48:57.786732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 00:48:57.786738 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 00:48:57.786744 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 00:48:57.786750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 00:48:57.786756 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 00:48:57.786763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 00:48:57.786769 systemd[1]: Reached target sockets.target - Socket Units. 
Oct 31 00:48:57.786776 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 00:48:57.786782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 00:48:57.786789 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 00:48:57.786795 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 00:48:57.786802 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 00:48:57.786808 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 00:48:57.786815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:48:57.786834 systemd-journald[217]: Collecting audit messages is disabled. Oct 31 00:48:57.786870 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 00:48:57.786877 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 00:48:57.786883 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 00:48:57.786891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 00:48:57.786898 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 00:48:57.786905 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 00:48:57.786912 kernel: Bridge firewalling registered Oct 31 00:48:57.786919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 00:48:57.786926 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 00:48:57.786948 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 31 00:48:57.786955 systemd-journald[217]: Journal started Oct 31 00:48:57.786969 systemd-journald[217]: Runtime Journal (/run/log/journal/9078b903f6f74db685ae227fb1581f4f) is 4.8M, max 38.6M, 33.8M free. Oct 31 00:48:57.744941 systemd-modules-load[218]: Inserted module 'overlay' Oct 31 00:48:57.789845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:48:57.775340 systemd-modules-load[218]: Inserted module 'br_netfilter' Oct 31 00:48:57.793260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:48:57.793277 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 00:48:57.793669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 00:48:57.796736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 00:48:57.800857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:48:57.803473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:48:57.809720 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 00:48:57.809971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 00:48:57.810998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 31 00:48:57.819020 dracut-cmdline[249]: dracut-dracut-053
Oct 31 00:48:57.820850 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:48:57.832441 systemd-resolved[251]: Positive Trust Anchors:
Oct 31 00:48:57.832448 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:48:57.832470 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:48:57.834945 systemd-resolved[251]: Defaulting to hostname 'linux'.
Oct 31 00:48:57.835630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:48:57.835786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:48:57.863338 kernel: SCSI subsystem initialized
Oct 31 00:48:57.871338 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 00:48:57.878335 kernel: iscsi: registered transport (tcp)
Oct 31 00:48:57.892378 kernel: iscsi: registered transport (qla4xxx)
Oct 31 00:48:57.892407 kernel: QLogic iSCSI HBA Driver
Oct 31 00:48:57.912768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:48:57.918438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 31 00:48:57.934334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 00:48:57.934360 kernel: device-mapper: uevent: version 1.0.3
Oct 31 00:48:57.934369 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 31 00:48:57.965338 kernel: raid6: avx2x4 gen() 53349 MB/s
Oct 31 00:48:57.982366 kernel: raid6: avx2x2 gen() 52916 MB/s
Oct 31 00:48:57.999482 kernel: raid6: avx2x1 gen() 46064 MB/s
Oct 31 00:48:57.999503 kernel: raid6: using algorithm avx2x4 gen() 53349 MB/s
Oct 31 00:48:58.017536 kernel: raid6: .... xor() 21397 MB/s, rmw enabled
Oct 31 00:48:58.017588 kernel: raid6: using avx2x2 recovery algorithm
Oct 31 00:48:58.031337 kernel: xor: automatically using best checksumming function avx
Oct 31 00:48:58.134420 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 31 00:48:58.139398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:48:58.145591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:48:58.152578 systemd-udevd[434]: Using default interface naming scheme 'v255'.
Oct 31 00:48:58.155078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:48:58.160402 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 31 00:48:58.167055 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Oct 31 00:48:58.181669 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:48:58.185415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:48:58.255671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:48:58.259420 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 31 00:48:58.269499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:48:58.269987 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:48:58.270699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:48:58.270926 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:48:58.274402 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 31 00:48:58.281014 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:48:58.319339 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Oct 31 00:48:58.320667 kernel: vmw_pvscsi: using 64bit dma
Oct 31 00:48:58.320687 kernel: vmw_pvscsi: max_id: 16
Oct 31 00:48:58.320695 kernel: vmw_pvscsi: setting ring_pages to 8
Oct 31 00:48:58.325340 kernel: vmw_pvscsi: enabling reqCallThreshold
Oct 31 00:48:58.325360 kernel: vmw_pvscsi: driver-based request coalescing enabled
Oct 31 00:48:58.325369 kernel: vmw_pvscsi: using MSI-X
Oct 31 00:48:58.329438 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Oct 31 00:48:58.341984 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Oct 31 00:48:58.342003 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Oct 31 00:48:58.350671 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Oct 31 00:48:58.350694 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Oct 31 00:48:58.350785 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Oct 31 00:48:58.358340 kernel: libata version 3.00 loaded.
Oct 31 00:48:58.360338 kernel: ata_piix 0000:00:07.1: version 2.13
Oct 31 00:48:58.360459 kernel: scsi host1: ata_piix
Oct 31 00:48:58.362164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:48:58.362236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:48:58.362565 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:48:58.362672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:48:58.362742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:48:58.362909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:48:58.366066 kernel: cryptd: max_cpu_qlen set to 1000
Oct 31 00:48:58.366087 kernel: scsi host2: ata_piix
Oct 31 00:48:58.366170 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Oct 31 00:48:58.366182 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Oct 31 00:48:58.367473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:48:58.370342 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Oct 31 00:48:58.380289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:48:58.383402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:48:58.393770 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:48:58.537422 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Oct 31 00:48:58.540340 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Oct 31 00:48:58.551596 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 31 00:48:58.551620 kernel: AES CTR mode by8 optimization enabled
Oct 31 00:48:58.556726 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Oct 31 00:48:58.556860 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 31 00:48:58.556939 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Oct 31 00:48:58.557009 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Oct 31 00:48:58.558295 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Oct 31 00:48:58.560783 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Oct 31 00:48:58.560874 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 31 00:48:58.562740 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 31 00:48:58.562756 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 31 00:48:58.568336 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 31 00:48:58.592337 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481)
Oct 31 00:48:58.595126 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Oct 31 00:48:58.597665 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Oct 31 00:48:58.600256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Oct 31 00:48:58.603459 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (487)
Oct 31 00:48:58.606179 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Oct 31 00:48:58.606310 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Oct 31 00:48:58.609449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 31 00:48:58.633353 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 31 00:48:58.637342 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 31 00:48:58.642541 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 31 00:48:59.641362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 31 00:48:59.641400 disk-uuid[588]: The operation has completed successfully.
Oct 31 00:48:59.675958 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 31 00:48:59.676011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 31 00:48:59.680400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 31 00:48:59.682115 sh[608]: Success
Oct 31 00:48:59.690357 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 31 00:48:59.737539 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 00:48:59.738049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 31 00:48:59.738928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 31 00:48:59.755720 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0
Oct 31 00:48:59.755744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:48:59.755753 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 31 00:48:59.756813 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 31 00:48:59.758335 kernel: BTRFS info (device dm-0): using free space tree
Oct 31 00:48:59.764339 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 31 00:48:59.766502 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 31 00:48:59.771412 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Oct 31 00:48:59.772420 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 31 00:48:59.796468 kernel: BTRFS info (device sda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:48:59.796497 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:48:59.797373 kernel: BTRFS info (device sda6): using free space tree
Oct 31 00:48:59.805338 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 31 00:48:59.814425 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 31 00:48:59.816337 kernel: BTRFS info (device sda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:48:59.818194 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 31 00:48:59.823421 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 31 00:48:59.847279 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 31 00:48:59.851420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 31 00:48:59.903948 ignition[666]: Ignition 2.19.0
Oct 31 00:48:59.904224 ignition[666]: Stage: fetch-offline
Oct 31 00:48:59.904246 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:48:59.904252 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:48:59.904305 ignition[666]: parsed url from cmdline: ""
Oct 31 00:48:59.904307 ignition[666]: no config URL provided
Oct 31 00:48:59.904310 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 00:48:59.904314 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Oct 31 00:48:59.904705 ignition[666]: config successfully fetched
Oct 31 00:48:59.904724 ignition[666]: parsing config with SHA512: bba6c530faf942ee9b42d60c3a3efc26c7e260a410f3743137e5fa655b09c8f3d5ef29c8349e0aec55646014d4719f5ca77fe5877c177f19805d462b6a46cf2f
Oct 31 00:48:59.907850 unknown[666]: fetched base config from "system"
Oct 31 00:48:59.907991 unknown[666]: fetched user config from "vmware"
Oct 31 00:48:59.908375 ignition[666]: fetch-offline: fetch-offline passed
Oct 31 00:48:59.908546 ignition[666]: Ignition finished successfully
Oct 31 00:48:59.909236 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:48:59.928907 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:48:59.932415 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:48:59.945036 systemd-networkd[800]: lo: Link UP
Oct 31 00:48:59.945223 systemd-networkd[800]: lo: Gained carrier
Oct 31 00:48:59.946145 systemd-networkd[800]: Enumeration completed
Oct 31 00:48:59.949544 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Oct 31 00:48:59.949658 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Oct 31 00:48:59.946298 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:48:59.946487 systemd[1]: Reached target network.target - Network.
Oct 31 00:48:59.946507 systemd-networkd[800]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Oct 31 00:48:59.946582 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 31 00:48:59.950420 systemd-networkd[800]: ens192: Link UP
Oct 31 00:48:59.950423 systemd-networkd[800]: ens192: Gained carrier
Oct 31 00:48:59.952470 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 31 00:48:59.959592 ignition[802]: Ignition 2.19.0
Oct 31 00:48:59.959601 ignition[802]: Stage: kargs
Oct 31 00:48:59.959764 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:48:59.959771 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:48:59.961321 ignition[802]: kargs: kargs passed
Oct 31 00:48:59.961478 ignition[802]: Ignition finished successfully
Oct 31 00:48:59.963072 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 31 00:48:59.968541 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 31 00:48:59.976626 ignition[809]: Ignition 2.19.0
Oct 31 00:48:59.976632 ignition[809]: Stage: disks
Oct 31 00:48:59.977288 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:48:59.977420 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:48:59.978799 ignition[809]: disks: disks passed
Oct 31 00:48:59.978831 ignition[809]: Ignition finished successfully
Oct 31 00:48:59.979794 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 31 00:48:59.980015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 31 00:48:59.980143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 31 00:48:59.980344 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:48:59.980521 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:48:59.980693 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:48:59.988548 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 31 00:48:59.999132 systemd-fsck[817]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 31 00:49:00.000652 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 31 00:49:00.004496 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 31 00:49:00.061605 kernel: EXT4-fs (sda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none.
Oct 31 00:49:00.061091 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 31 00:49:00.061460 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:49:00.066411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:49:00.068372 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 31 00:49:00.068772 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 31 00:49:00.068988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 00:49:00.069003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:49:00.072322 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 31 00:49:00.073621 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 31 00:49:00.075346 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (825)
Oct 31 00:49:00.078459 kernel: BTRFS info (device sda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:49:00.078479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:49:00.078488 kernel: BTRFS info (device sda6): using free space tree
Oct 31 00:49:00.082437 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 31 00:49:00.082348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:49:00.102316 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 00:49:00.104860 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory
Oct 31 00:49:00.107133 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 00:49:00.109082 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 00:49:00.157871 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 31 00:49:00.165464 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 31 00:49:00.167873 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 31 00:49:00.170414 kernel: BTRFS info (device sda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:49:00.183583 ignition[937]: INFO : Ignition 2.19.0
Oct 31 00:49:00.183583 ignition[937]: INFO : Stage: mount
Oct 31 00:49:00.183583 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:49:00.183583 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:49:00.184705 ignition[937]: INFO : mount: mount passed
Oct 31 00:49:00.184705 ignition[937]: INFO : Ignition finished successfully
Oct 31 00:49:00.184587 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 31 00:49:00.190403 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 31 00:49:00.190596 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 31 00:49:00.754340 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 31 00:49:00.759490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:49:00.841358 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (950)
Oct 31 00:49:00.846291 kernel: BTRFS info (device sda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:49:00.846321 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:49:00.846354 kernel: BTRFS info (device sda6): using free space tree
Oct 31 00:49:00.850344 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 31 00:49:00.851867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:49:00.866557 ignition[967]: INFO : Ignition 2.19.0
Oct 31 00:49:00.866829 ignition[967]: INFO : Stage: files
Oct 31 00:49:00.867815 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:49:00.867815 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:49:00.867815 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 00:49:00.868505 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 00:49:00.868505 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 00:49:00.870553 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 00:49:00.870798 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 00:49:00.871188 unknown[967]: wrote ssh authorized keys file for user: core
Oct 31 00:49:00.871400 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 00:49:00.873075 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:49:00.873318 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 31 00:49:00.911434 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 00:49:00.957637 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:49:00.957988 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:00.959083 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 31 00:49:01.417356 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 00:49:01.665184 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:49:01.665465 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 31 00:49:01.665465 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 31 00:49:01.665465 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 31 00:49:01.665941 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:49:01.702171 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:49:01.704927 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:49:01.705087 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:49:01.705087 ignition[967]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 00:49:01.705087 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 00:49:01.705507 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:49:01.705507 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:49:01.705507 ignition[967]: INFO : files: files passed
Oct 31 00:49:01.705507 ignition[967]: INFO : Ignition finished successfully
Oct 31 00:49:01.706200 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 00:49:01.710450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 00:49:01.712329 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 00:49:01.712766 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 00:49:01.712954 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 00:49:01.719387 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:49:01.719387 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:49:01.720572 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:49:01.721061 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:49:01.721673 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 00:49:01.724392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 00:49:01.746469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 00:49:01.746680 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 00:49:01.746987 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 00:49:01.747098 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 00:49:01.747221 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 00:49:01.748412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 00:49:01.756679 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:49:01.760416 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 00:49:01.765451 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:49:01.765601 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:49:01.765753 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 00:49:01.765879 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 00:49:01.765947 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:49:01.766161 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 00:49:01.766291 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 00:49:01.766445 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 00:49:01.766646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:49:01.766852 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 00:49:01.767035 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 00:49:01.767216 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:49:01.767441 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 00:49:01.767639 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 00:49:01.767794 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 00:49:01.768105 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:49:01.768171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:49:01.768472 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:49:01.768623 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:49:01.768808 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 00:49:01.768847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:49:01.769026 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:49:01.769082 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:49:01.769308 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:49:01.769405 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:49:01.769641 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 00:49:01.769760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:49:01.773347 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:49:01.773507 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 00:49:01.773758 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 00:49:01.773908 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:49:01.773954 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:49:01.774095 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:49:01.774142 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:49:01.774292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:49:01.774368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:49:01.774604 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:49:01.774665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 00:49:01.782550 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 00:49:01.784450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 00:49:01.784597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:49:01.784690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:49:01.784945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:49:01.785024 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:49:01.786596 systemd-networkd[800]: ens192: Gained IPv6LL
Oct 31 00:49:01.789745 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:49:01.789831 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 00:49:01.795503 ignition[1022]: INFO : Ignition 2.19.0
Oct 31 00:49:01.795503 ignition[1022]: INFO : Stage: umount
Oct 31 00:49:01.795863 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:49:01.795863 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 31 00:49:01.796963 ignition[1022]: INFO : umount: umount passed
Oct 31 00:49:01.797097 ignition[1022]: INFO : Ignition finished successfully
Oct 31 00:49:01.798910 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:49:01.799181 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:49:01.799243 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 00:49:01.799425 systemd[1]: Stopped target network.target - Network.
Oct 31 00:49:01.799510 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:49:01.799548 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 00:49:01.799650 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:49:01.799672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 00:49:01.799771 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:49:01.799791 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 00:49:01.799889 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 00:49:01.799910 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 00:49:01.800075 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 00:49:01.800209 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 00:49:01.804076 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:49:01.804138 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 00:49:01.804434 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:49:01.804458 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:49:01.808451 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 00:49:01.808568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:49:01.808596 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:49:01.808706 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Oct 31 00:49:01.808728 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 31 00:49:01.810053 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:49:01.814186 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:49:01.814264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 00:49:01.815156 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:49:01.815192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:49:01.815495 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:49:01.815518 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:49:01.815662 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 00:49:01.815684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:49:01.818530 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:49:01.818595 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 00:49:01.822547 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:49:01.822622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:49:01.823041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:49:01.823077 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:49:01.823188 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:49:01.823207 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:49:01.823368 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:49:01.823392 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:49:01.823666 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:49:01.823689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:49:01.823974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:49:01.823997 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:49:01.830400 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 00:49:01.830510 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:49:01.830540 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:49:01.830667 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 00:49:01.830690 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:49:01.830812 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:49:01.830834 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:49:01.830951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:49:01.830972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:49:01.833356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:49:01.833597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 00:49:01.868155 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:49:01.868464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 00:49:01.869033 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 00:49:01.869177 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:49:01.869219 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 00:49:01.873495 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 00:49:01.885618 systemd[1]: Switching root.
Oct 31 00:49:01.918026 systemd-journald[217]: Journal stopped
Oct 31 00:49:02.918011 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Oct 31 00:49:02.918038 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:49:02.918047 kernel: SELinux: policy capability open_perms=1
Oct 31 00:49:02.918053 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:49:02.918058 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:49:02.918064 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:49:02.918071 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:49:02.918077 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:49:02.918083 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:49:02.918090 systemd[1]: Successfully loaded SELinux policy in 30.552ms.
Oct 31 00:49:02.918097 kernel: audit: type=1403 audit(1761871742.433:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 00:49:02.918103 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.652ms.
Oct 31 00:49:02.918110 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:49:02.918118 systemd[1]: Detected virtualization vmware.
Oct 31 00:49:02.918125 systemd[1]: Detected architecture x86-64.
Oct 31 00:49:02.918132 systemd[1]: Detected first boot.
Oct 31 00:49:02.918138 systemd[1]: Initializing machine ID from random generator.
Oct 31 00:49:02.918146 zram_generator::config[1067]: No configuration found.
Oct 31 00:49:02.918153 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:49:02.918161 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 31 00:49:02.918168 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Oct 31 00:49:02.918174 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 00:49:02.918181 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 00:49:02.918187 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:49:02.918195 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 00:49:02.918202 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 00:49:02.918209 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 00:49:02.918216 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 00:49:02.918223 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 00:49:02.918230 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 00:49:02.918237 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 00:49:02.918245 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 00:49:02.918252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:49:02.918258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:49:02.918265 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 00:49:02.918272 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 00:49:02.918278 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 00:49:02.918285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:49:02.918291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 31 00:49:02.918300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:49:02.918308 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 00:49:02.918316 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 00:49:02.921066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:49:02.921085 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 00:49:02.921093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:49:02.921100 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:49:02.921107 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:49:02.921117 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:49:02.921124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 00:49:02.921131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 00:49:02.921137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:49:02.921145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:49:02.921153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:49:02.921159 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 00:49:02.921166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 00:49:02.921173 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 00:49:02.921180 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 00:49:02.921187 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:02.921193 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 00:49:02.921200 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 00:49:02.921208 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 00:49:02.921215 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 00:49:02.921222 systemd[1]: Reached target machines.target - Containers.
Oct 31 00:49:02.921229 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 00:49:02.921235 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Oct 31 00:49:02.921242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:49:02.921249 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 00:49:02.921256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:49:02.921264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:49:02.921272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:49:02.921278 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 00:49:02.921285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:49:02.921292 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:49:02.921298 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 00:49:02.921305 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 00:49:02.921311 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 00:49:02.921318 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 00:49:02.921349 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:49:02.921357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:49:02.921363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 00:49:02.921370 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 00:49:02.921377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:49:02.921384 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 31 00:49:02.921390 systemd[1]: Stopped verity-setup.service.
Oct 31 00:49:02.921397 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:02.921406 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 00:49:02.921413 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 00:49:02.921419 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 00:49:02.921426 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 00:49:02.921433 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 00:49:02.921439 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 00:49:02.921446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:49:02.921453 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:49:02.921459 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 00:49:02.921467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:49:02.921474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:49:02.921481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:49:02.921487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:49:02.921494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:49:02.921501 kernel: loop: module loaded
Oct 31 00:49:02.921508 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 00:49:02.921514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 00:49:02.921541 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:49:02.921548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:49:02.921589 systemd-journald[1154]: Collecting audit messages is disabled.
Oct 31 00:49:02.921621 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 00:49:02.921630 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 00:49:02.921637 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:49:02.921644 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:49:02.921651 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 31 00:49:02.921658 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 00:49:02.921665 systemd-journald[1154]: Journal started
Oct 31 00:49:02.921679 systemd-journald[1154]: Runtime Journal (/run/log/journal/46ff21dd5d1643609e79af72da14efe7) is 4.8M, max 38.6M, 33.8M free.
Oct 31 00:49:02.737699 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:49:02.751099 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 31 00:49:02.751342 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 00:49:02.922161 jq[1134]: true
Oct 31 00:49:02.922755 jq[1166]: true
Oct 31 00:49:02.941805 kernel: fuse: init (API version 7.39)
Oct 31 00:49:02.941831 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 00:49:02.944743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:49:02.944764 kernel: ACPI: bus type drm_connector registered
Oct 31 00:49:02.954441 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 00:49:02.956965 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:49:02.961897 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 00:49:02.961942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:49:02.968265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:49:02.970390 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 00:49:02.977403 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:49:02.977434 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:49:02.978653 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 00:49:02.978899 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:49:02.979013 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:49:02.979698 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:49:02.979791 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 00:49:02.979969 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 00:49:02.980198 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 00:49:03.000435 kernel: loop0: detected capacity change from 0 to 219144
Oct 31 00:49:02.996583 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 00:49:03.002803 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 00:49:03.010545 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 00:49:03.014040 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 00:49:03.023528 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 31 00:49:03.029610 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 00:49:03.050242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 00:49:03.054866 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 00:49:03.056205 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 31 00:49:03.059395 systemd-journald[1154]: Time spent on flushing to /var/log/journal/46ff21dd5d1643609e79af72da14efe7 is 24.144ms for 1841 entries.
Oct 31 00:49:03.059395 systemd-journald[1154]: System Journal (/var/log/journal/46ff21dd5d1643609e79af72da14efe7) is 8.0M, max 584.8M, 576.8M free.
Oct 31 00:49:03.090226 systemd-journald[1154]: Received client request to flush runtime journal.
Oct 31 00:49:03.090254 kernel: loop1: detected capacity change from 0 to 142488
Oct 31 00:49:03.065370 ignition[1174]: Ignition 2.19.0
Oct 31 00:49:03.068219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:49:03.065555 ignition[1174]: deleting config from guestinfo properties
Oct 31 00:49:03.074174 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Oct 31 00:49:03.076813 ignition[1174]: Successfully deleted config
Oct 31 00:49:03.074186 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Oct 31 00:49:03.077820 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Oct 31 00:49:03.092417 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:49:03.093989 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 00:49:03.100877 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 00:49:03.127809 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:49:03.132613 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 31 00:49:03.139396 kernel: loop2: detected capacity change from 0 to 2976
Oct 31 00:49:03.144405 udevadm[1232]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 31 00:49:03.144840 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 00:49:03.153532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:49:03.172310 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Oct 31 00:49:03.172333 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Oct 31 00:49:03.182340 kernel: loop3: detected capacity change from 0 to 140768
Oct 31 00:49:03.187709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:49:03.253340 kernel: loop4: detected capacity change from 0 to 219144
Oct 31 00:49:03.286337 kernel: loop5: detected capacity change from 0 to 142488
Oct 31 00:49:03.303337 kernel: loop6: detected capacity change from 0 to 2976
Oct 31 00:49:03.320338 kernel: loop7: detected capacity change from 0 to 140768
Oct 31 00:49:03.346546 (sd-merge)[1239]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'.
Oct 31 00:49:03.346820 (sd-merge)[1239]: Merged extensions into '/usr'.
Oct 31 00:49:03.352197 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 00:49:03.352207 systemd[1]: Reloading...
Oct 31 00:49:03.405337 zram_generator::config[1262]: No configuration found.
Oct 31 00:49:03.505819 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 31 00:49:03.521924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:49:03.551048 systemd[1]: Reloading finished in 198 ms.
Oct 31 00:49:03.580345 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 00:49:03.587819 systemd[1]: Starting ensure-sysext.service...
Oct 31 00:49:03.591413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:49:03.593153 systemd[1]: Reloading requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)...
Oct 31 00:49:03.593190 systemd[1]: Reloading...
Oct 31 00:49:03.608916 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 00:49:03.618564 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 00:49:03.618771 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 00:49:03.619263 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 00:49:03.619439 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Oct 31 00:49:03.619478 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Oct 31 00:49:03.623105 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:49:03.623111 systemd-tmpfiles[1321]: Skipping /boot
Oct 31 00:49:03.632588 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:49:03.632595 systemd-tmpfiles[1321]: Skipping /boot
Oct 31 00:49:03.639372 zram_generator::config[1344]: No configuration found.
Oct 31 00:49:03.711917 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 31 00:49:03.727730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:49:03.756595 systemd[1]: Reloading finished in 163 ms.
Oct 31 00:49:03.773406 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 00:49:03.773790 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 00:49:03.778625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:49:03.783547 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:49:03.787101 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 00:49:03.788612 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 00:49:03.791976 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:49:03.793446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:49:03.795456 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 00:49:03.796585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:03.803495 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:49:03.804230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:49:03.808496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:49:03.808668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:49:03.808733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:03.811303 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 00:49:03.812428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:03.812510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:49:03.812570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:03.816318 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:03.822502 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:49:03.822676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:49:03.822795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:49:03.825820 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 00:49:03.826351 systemd[1]: Finished ensure-sysext.service.
Oct 31 00:49:03.826883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:49:03.827305 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:49:03.837449 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 00:49:03.837743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:49:03.837854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:49:03.838108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:49:03.838184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:49:03.838513 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:49:03.838586 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:49:03.839939 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:49:03.839983 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:49:03.842233 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 00:49:03.846573 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 00:49:03.853266 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
Oct 31 00:49:03.865544 augenrules[1442]: No rules
Oct 31 00:49:03.863484 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 00:49:03.864700 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:49:03.866280 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 00:49:03.883288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 00:49:03.883754 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:49:03.894464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:49:03.894604 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 00:49:03.930645 systemd-networkd[1463]: lo: Link UP Oct 31 00:49:03.930650 systemd-networkd[1463]: lo: Gained carrier Oct 31 00:49:03.930942 systemd-networkd[1463]: Enumeration completed Oct 31 00:49:03.931025 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 00:49:03.935424 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 00:49:03.940365 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 00:49:03.940613 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 00:49:03.960994 systemd-resolved[1412]: Positive Trust Anchors: Oct 31 00:49:03.961004 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 00:49:03.961028 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 00:49:03.964090 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 31 00:49:03.964549 systemd-resolved[1412]: Defaulting to hostname 'linux'. Oct 31 00:49:03.965931 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 00:49:03.967745 systemd[1]: Reached target network.target - Network. 
Oct 31 00:49:03.969355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 00:49:03.999344 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1468) Oct 31 00:49:04.021266 systemd-networkd[1463]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 31 00:49:04.023441 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 31 00:49:04.023576 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 31 00:49:04.025717 systemd-networkd[1463]: ens192: Link UP Oct 31 00:49:04.025808 systemd-networkd[1463]: ens192: Gained carrier Oct 31 00:49:04.026099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 31 00:49:04.030350 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 31 00:49:04.031153 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. Oct 31 00:49:04.032523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 00:49:04.036348 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 31 00:49:04.039491 kernel: ACPI: button: Power Button [PWRF] Oct 31 00:49:04.041483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 00:49:04.093461 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 31 00:49:04.099451 kernel: Guest personality initialized and is active Oct 31 00:49:04.102380 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 31 00:49:04.102405 kernel: Initialized host personality Oct 31 00:49:04.108490 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Oct 31 00:50:41.796824 systemd-resolved[1412]: Clock change detected. Flushing caches. 
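[Editor's note] The line "ens192: Configuring with /etc/systemd/network/00-vmware.network" above shows systemd-networkd matching the vmxnet3 NIC against a .network file. The file's contents are not shown in the log; a DHCP-based .network file of this kind typically looks like the following (contents assumed, not copied from the host):

```ini
# Hypothetical sketch of /etc/systemd/network/00-vmware.network.
[Match]
Name=ens192

[Network]
DHCP=yes
```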
Oct 31 00:50:41.796903 systemd-timesyncd[1431]: Contacted time server 5.161.191.31:123 (0.flatcar.pool.ntp.org). Oct 31 00:50:41.796983 systemd-timesyncd[1431]: Initial clock synchronization to Fri 2025-10-31 00:50:41.796744 UTC. Oct 31 00:50:41.813659 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 00:50:41.815721 (udev-worker)[1468]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 31 00:50:41.817868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:50:41.839259 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 31 00:50:41.843849 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 31 00:50:41.854773 lvm[1498]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 00:50:41.876300 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 31 00:50:41.876712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:50:41.877440 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:50:41.877571 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 00:50:41.877748 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 00:50:41.877878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 00:50:41.878087 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 00:50:41.878229 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 00:50:41.878340 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Oct 31 00:50:41.878455 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 00:50:41.878480 systemd[1]: Reached target paths.target - Path Units. Oct 31 00:50:41.878574 systemd[1]: Reached target timers.target - Timer Units. Oct 31 00:50:41.879180 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 00:50:41.880121 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 00:50:41.883747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 00:50:41.884637 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 31 00:50:41.884996 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 00:50:41.885127 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 00:50:41.885212 systemd[1]: Reached target basic.target - Basic System. Oct 31 00:50:41.885316 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 00:50:41.885329 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 00:50:41.887837 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 00:50:41.889089 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 00:50:41.891660 lvm[1505]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 00:50:41.891708 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 00:50:41.895099 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 00:50:41.895206 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Oct 31 00:50:41.897946 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 00:50:41.899985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 00:50:41.904710 jq[1508]: false Oct 31 00:50:41.904862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 00:50:41.907701 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 00:50:41.913340 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 00:50:41.913701 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 00:50:41.914127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 00:50:41.919764 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 00:50:41.921175 extend-filesystems[1509]: Found loop4 Oct 31 00:50:41.921423 extend-filesystems[1509]: Found loop5 Oct 31 00:50:41.921554 extend-filesystems[1509]: Found loop6 Oct 31 00:50:41.921706 extend-filesystems[1509]: Found loop7 Oct 31 00:50:41.921916 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Oct 31 00:50:41.921960 extend-filesystems[1509]: Found sda Oct 31 00:50:41.922118 extend-filesystems[1509]: Found sda1 Oct 31 00:50:41.922235 extend-filesystems[1509]: Found sda2 Oct 31 00:50:41.922359 extend-filesystems[1509]: Found sda3 Oct 31 00:50:41.922484 extend-filesystems[1509]: Found usr Oct 31 00:50:41.922607 extend-filesystems[1509]: Found sda4 Oct 31 00:50:41.922755 extend-filesystems[1509]: Found sda6 Oct 31 00:50:41.922877 extend-filesystems[1509]: Found sda7 Oct 31 00:50:41.923090 extend-filesystems[1509]: Found sda9 Oct 31 00:50:41.923090 extend-filesystems[1509]: Checking size of /dev/sda9 Oct 31 00:50:41.925788 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Oct 31 00:50:41.927157 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 31 00:50:41.929691 extend-filesystems[1509]: Old size kept for /dev/sda9 Oct 31 00:50:41.929691 extend-filesystems[1509]: Found sr0 Oct 31 00:50:41.934859 dbus-daemon[1507]: [system] SELinux support is enabled Oct 31 00:50:41.935826 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 00:50:41.945086 update_engine[1518]: I20251031 00:50:41.942011 1518 main.cc:92] Flatcar Update Engine starting Oct 31 00:50:41.945086 update_engine[1518]: I20251031 00:50:41.942883 1518 update_check_scheduler.cc:74] Next update check in 3m37s Oct 31 00:50:41.937151 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 00:50:41.937787 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 00:50:41.937955 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 00:50:41.938369 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 00:50:41.938861 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 00:50:41.945397 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 31 00:50:41.946850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 00:50:41.946953 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 00:50:41.956004 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 31 00:50:41.956032 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 00:50:41.957054 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 00:50:41.957069 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 00:50:41.957258 systemd[1]: Started update-engine.service - Update Engine. Oct 31 00:50:41.964727 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 00:50:41.965084 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 31 00:50:41.970536 jq[1520]: true Oct 31 00:50:41.970677 tar[1533]: linux-amd64/LICENSE Oct 31 00:50:41.970677 tar[1533]: linux-amd64/helm Oct 31 00:50:41.985221 jq[1548]: true Oct 31 00:50:41.984716 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 31 00:50:41.986035 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1472) Oct 31 00:50:41.991680 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Oct 31 00:50:42.043759 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
Oct 31 00:50:42.061194 unknown[1550]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 31 00:50:42.064250 systemd-logind[1514]: Watching system buttons on /dev/input/event1 (Power Button) Oct 31 00:50:42.065645 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 00:50:42.065750 systemd-logind[1514]: New seat seat0. Oct 31 00:50:42.066503 systemd[1]: Started systemd-logind.service - User Login Management. Oct 31 00:50:42.074596 unknown[1550]: Core dump limit set to -1 Oct 31 00:50:42.085896 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Oct 31 00:50:42.091352 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 00:50:42.092384 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 31 00:50:42.104389 kernel: NET: Registered PF_VSOCK protocol family Oct 31 00:50:42.152608 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 00:50:42.237635 containerd[1535]: time="2025-10-31T00:50:42.237286936Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 31 00:50:42.279610 containerd[1535]: time="2025-10-31T00:50:42.279506604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283337793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283361743Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283375786Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283482768Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283492980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283527724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283535419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283634933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283644169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283651739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284012 containerd[1535]: time="2025-10-31T00:50:42.283657446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284166 containerd[1535]: time="2025-10-31T00:50:42.283699806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284166 containerd[1535]: time="2025-10-31T00:50:42.283809447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284166 containerd[1535]: time="2025-10-31T00:50:42.283864506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 00:50:42.284166 containerd[1535]: time="2025-10-31T00:50:42.283872890Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 31 00:50:42.284166 containerd[1535]: time="2025-10-31T00:50:42.283913409Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 31 00:50:42.284166 containerd[1535]: time="2025-10-31T00:50:42.283941164Z" level=info msg="metadata content store policy set" policy=shared Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.298846365Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.298881249Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.298902393Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.298913552Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.298922025Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.299000582Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.299274700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.299337522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.299351717Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 31 00:50:42.299453 containerd[1535]: time="2025-10-31T00:50:42.299362290Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299496246Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299518131Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299529571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299540319Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299549435Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299559527Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299571980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299581572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299597192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299607908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299617266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299636729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299649 containerd[1535]: time="2025-10-31T00:50:42.299644576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299654144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299663112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299672052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299681610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299693346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299702043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299713508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299723119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299734785Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299750982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299760496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299768436Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 31 00:50:42.299811 containerd[1535]: time="2025-10-31T00:50:42.299797319Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299810826Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299836223Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299845725Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299853584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299879285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299887787Z" level=info msg="NRI interface is disabled by configuration." Oct 31 00:50:42.300000 containerd[1535]: time="2025-10-31T00:50:42.299896674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 31 00:50:42.300099 containerd[1535]: time="2025-10-31T00:50:42.300065715Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 00:50:42.300185 containerd[1535]: time="2025-10-31T00:50:42.300104145Z" level=info msg="Connect containerd service" Oct 31 00:50:42.300185 containerd[1535]: time="2025-10-31T00:50:42.300130641Z" level=info msg="using legacy CRI server" Oct 31 00:50:42.300185 containerd[1535]: time="2025-10-31T00:50:42.300138027Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 00:50:42.300226 containerd[1535]: time="2025-10-31T00:50:42.300208398Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 00:50:42.303369 containerd[1535]: time="2025-10-31T00:50:42.302990120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303414609Z" level=info msg="Start subscribing containerd event" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303443771Z" level=info msg="Start recovering state" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303475295Z" level=info msg="Start event monitor" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303484038Z" level=info msg="Start 
snapshots syncer" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303489724Z" level=info msg="Start cni network conf syncer for default" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303493519Z" level=info msg="Start streaming server" Oct 31 00:50:42.303568 containerd[1535]: time="2025-10-31T00:50:42.303554858Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 00:50:42.303693 containerd[1535]: time="2025-10-31T00:50:42.303586291Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 00:50:42.303682 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 00:50:42.304412 containerd[1535]: time="2025-10-31T00:50:42.304398830Z" level=info msg="containerd successfully booted in 0.067508s" Oct 31 00:50:42.375576 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 00:50:42.395966 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 00:50:42.401833 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 00:50:42.405801 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 00:50:42.406461 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 00:50:42.414158 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 00:50:42.419923 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 00:50:42.421805 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 00:50:42.424712 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 00:50:42.424936 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 00:50:42.491842 tar[1533]: linux-amd64/README.md Oct 31 00:50:42.498206 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
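[Editor's note] The "Start cri plugin with config {...}" dump above includes the runc runtime configured with Options:map[SystemdCgroup:true] under io.containerd.runc.v2. Expressed as a containerd config.toml fragment, reconstructed from the logged values rather than copied from the host's actual file, that setting would look roughly like:

```toml
# Fragment matching the logged runc runtime options (SystemdCgroup:true).
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

SystemdCgroup=true tells runc to delegate cgroup management to systemd, which is the expected pairing on a systemd-managed host like this one.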
Oct 31 00:50:42.800819 systemd-networkd[1463]: ens192: Gained IPv6LL Oct 31 00:50:42.801937 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 00:50:42.802837 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 00:50:42.808801 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Oct 31 00:50:42.810747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:50:42.812717 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 00:50:42.833224 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 00:50:42.842660 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 31 00:50:42.842781 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Oct 31 00:50:42.843514 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 31 00:50:43.698134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:50:43.698529 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 00:50:43.698699 systemd[1]: Startup finished in 987ms (kernel) + 4.808s (initrd) + 3.608s (userspace) = 9.404s. Oct 31 00:50:43.704764 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:50:43.725742 login[1654]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 00:50:43.725991 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 00:50:43.732401 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 00:50:43.738094 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 00:50:43.740070 systemd-logind[1514]: New session 1 of user core. 
Oct 31 00:50:43.744130 systemd-logind[1514]: New session 2 of user core.
Oct 31 00:50:43.747893 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 31 00:50:43.752891 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 31 00:50:43.755251 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:50:43.815480 systemd[1695]: Queued start job for default target default.target.
Oct 31 00:50:43.823361 systemd[1695]: Created slice app.slice - User Application Slice.
Oct 31 00:50:43.823596 systemd[1695]: Reached target paths.target - Paths.
Oct 31 00:50:43.823662 systemd[1695]: Reached target timers.target - Timers.
Oct 31 00:50:43.824370 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 31 00:50:43.831220 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 31 00:50:43.831407 systemd[1695]: Reached target sockets.target - Sockets.
Oct 31 00:50:43.831420 systemd[1695]: Reached target basic.target - Basic System.
Oct 31 00:50:43.831443 systemd[1695]: Reached target default.target - Main User Target.
Oct 31 00:50:43.831461 systemd[1695]: Startup finished in 72ms.
Oct 31 00:50:43.831603 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 31 00:50:43.832493 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 31 00:50:43.833042 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 31 00:50:44.374037 kubelet[1688]: E1031 00:50:44.373999 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:50:44.375440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:50:44.375524 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:50:54.625921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:50:54.641981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:50:54.707752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:50:54.710085 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:50:54.749173 kubelet[1737]: E1031 00:50:54.749137 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:50:54.751029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:50:54.751108 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:51:05.001552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 31 00:51:05.010758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:51:05.132649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:05.135011 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:51:05.188340 kubelet[1752]: E1031 00:51:05.188305 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:51:05.189329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:51:05.189406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:51:12.177773 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 31 00:51:12.185969 systemd[1]: Started sshd@0-139.178.70.106:22-139.178.68.195:55272.service - OpenSSH per-connection server daemon (139.178.68.195:55272).
Oct 31 00:51:12.213785 sshd[1761]: Accepted publickey for core from 139.178.68.195 port 55272 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.214465 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.217445 systemd-logind[1514]: New session 3 of user core.
Oct 31 00:51:12.223814 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 31 00:51:12.275834 systemd[1]: Started sshd@1-139.178.70.106:22-139.178.68.195:55288.service - OpenSSH per-connection server daemon (139.178.68.195:55288).
Oct 31 00:51:12.300683 sshd[1766]: Accepted publickey for core from 139.178.68.195 port 55288 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.301304 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.304473 systemd-logind[1514]: New session 4 of user core.
Oct 31 00:51:12.306703 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 31 00:51:12.355733 sshd[1766]: pam_unix(sshd:session): session closed for user core
Oct 31 00:51:12.364015 systemd[1]: sshd@1-139.178.70.106:22-139.178.68.195:55288.service: Deactivated successfully.
Oct 31 00:51:12.364921 systemd[1]: session-4.scope: Deactivated successfully.
Oct 31 00:51:12.365659 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit.
Oct 31 00:51:12.366740 systemd[1]: Started sshd@2-139.178.70.106:22-139.178.68.195:55304.service - OpenSSH per-connection server daemon (139.178.68.195:55304).
Oct 31 00:51:12.367175 systemd-logind[1514]: Removed session 4.
Oct 31 00:51:12.390020 sshd[1773]: Accepted publickey for core from 139.178.68.195 port 55304 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.390705 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.393775 systemd-logind[1514]: New session 5 of user core.
Oct 31 00:51:12.399845 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 31 00:51:12.445928 sshd[1773]: pam_unix(sshd:session): session closed for user core
Oct 31 00:51:12.454346 systemd[1]: sshd@2-139.178.70.106:22-139.178.68.195:55304.service: Deactivated successfully.
Oct 31 00:51:12.455377 systemd[1]: session-5.scope: Deactivated successfully.
Oct 31 00:51:12.455898 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit.
Oct 31 00:51:12.457129 systemd[1]: Started sshd@3-139.178.70.106:22-139.178.68.195:55310.service - OpenSSH per-connection server daemon (139.178.68.195:55310).
Oct 31 00:51:12.458950 systemd-logind[1514]: Removed session 5.
Oct 31 00:51:12.485447 sshd[1780]: Accepted publickey for core from 139.178.68.195 port 55310 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.486238 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.489139 systemd-logind[1514]: New session 6 of user core.
Oct 31 00:51:12.494731 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 31 00:51:12.543875 sshd[1780]: pam_unix(sshd:session): session closed for user core
Oct 31 00:51:12.553175 systemd[1]: sshd@3-139.178.70.106:22-139.178.68.195:55310.service: Deactivated successfully.
Oct 31 00:51:12.554104 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 00:51:12.555087 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit.
Oct 31 00:51:12.558072 systemd[1]: Started sshd@4-139.178.70.106:22-139.178.68.195:55312.service - OpenSSH per-connection server daemon (139.178.68.195:55312).
Oct 31 00:51:12.558975 systemd-logind[1514]: Removed session 6.
Oct 31 00:51:12.583779 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 55312 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.584661 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.588886 systemd-logind[1514]: New session 7 of user core.
Oct 31 00:51:12.595801 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 31 00:51:12.653582 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 31 00:51:12.653806 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:51:12.667300 sudo[1790]: pam_unix(sudo:session): session closed for user root
Oct 31 00:51:12.669123 sshd[1787]: pam_unix(sshd:session): session closed for user core
Oct 31 00:51:12.674376 systemd[1]: sshd@4-139.178.70.106:22-139.178.68.195:55312.service: Deactivated successfully.
Oct 31 00:51:12.675349 systemd[1]: session-7.scope: Deactivated successfully.
Oct 31 00:51:12.676370 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit.
Oct 31 00:51:12.679904 systemd[1]: Started sshd@5-139.178.70.106:22-139.178.68.195:55318.service - OpenSSH per-connection server daemon (139.178.68.195:55318).
Oct 31 00:51:12.681134 systemd-logind[1514]: Removed session 7.
Oct 31 00:51:12.706129 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 55318 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.706279 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.709179 systemd-logind[1514]: New session 8 of user core.
Oct 31 00:51:12.721803 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 31 00:51:12.771346 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 31 00:51:12.771556 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:51:12.773775 sudo[1799]: pam_unix(sudo:session): session closed for user root
Oct 31 00:51:12.777417 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 31 00:51:12.777617 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:51:12.791801 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 31 00:51:12.792698 auditctl[1802]: No rules
Oct 31 00:51:12.793030 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 00:51:12.793176 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 31 00:51:12.795037 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:51:12.813341 augenrules[1820]: No rules
Oct 31 00:51:12.813984 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:51:12.814879 sudo[1798]: pam_unix(sudo:session): session closed for user root
Oct 31 00:51:12.816177 sshd[1795]: pam_unix(sshd:session): session closed for user core
Oct 31 00:51:12.818900 systemd[1]: sshd@5-139.178.70.106:22-139.178.68.195:55318.service: Deactivated successfully.
Oct 31 00:51:12.819515 systemd[1]: session-8.scope: Deactivated successfully.
Oct 31 00:51:12.820832 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit.
Oct 31 00:51:12.821892 systemd[1]: Started sshd@6-139.178.70.106:22-139.178.68.195:55320.service - OpenSSH per-connection server daemon (139.178.68.195:55320).
Oct 31 00:51:12.822813 systemd-logind[1514]: Removed session 8.
Oct 31 00:51:12.845314 sshd[1828]: Accepted publickey for core from 139.178.68.195 port 55320 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:51:12.845972 sshd[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:51:12.849122 systemd-logind[1514]: New session 9 of user core.
Oct 31 00:51:12.854826 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 31 00:51:12.903036 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 00:51:12.903244 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:51:13.173834 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 31 00:51:13.173863 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 31 00:51:13.428144 dockerd[1847]: time="2025-10-31T00:51:13.428063334Z" level=info msg="Starting up"
Oct 31 00:51:13.500930 dockerd[1847]: time="2025-10-31T00:51:13.500785382Z" level=info msg="Loading containers: start."
Oct 31 00:51:13.563639 kernel: Initializing XFRM netlink socket
Oct 31 00:51:13.606773 systemd-networkd[1463]: docker0: Link UP
Oct 31 00:51:13.613323 dockerd[1847]: time="2025-10-31T00:51:13.613302229Z" level=info msg="Loading containers: done."
Oct 31 00:51:13.622355 dockerd[1847]: time="2025-10-31T00:51:13.622333420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 31 00:51:13.622429 dockerd[1847]: time="2025-10-31T00:51:13.622383590Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 31 00:51:13.622448 dockerd[1847]: time="2025-10-31T00:51:13.622438414Z" level=info msg="Daemon has completed initialization"
Oct 31 00:51:13.636974 dockerd[1847]: time="2025-10-31T00:51:13.636920773Z" level=info msg="API listen on /run/docker.sock"
Oct 31 00:51:13.637140 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 31 00:51:14.202499 containerd[1535]: time="2025-10-31T00:51:14.202312749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Oct 31 00:51:14.735098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880984346.mount: Deactivated successfully.
Oct 31 00:51:15.439705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 31 00:51:15.444725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:51:15.518093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:15.521229 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:51:15.558080 kubelet[2051]: E1031 00:51:15.557925 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:51:15.559059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:51:15.559147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:51:15.698728 containerd[1535]: time="2025-10-31T00:51:15.698662860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:15.706014 containerd[1535]: time="2025-10-31T00:51:15.705970798Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Oct 31 00:51:15.713226 containerd[1535]: time="2025-10-31T00:51:15.713203380Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:15.722775 containerd[1535]: time="2025-10-31T00:51:15.721329867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:15.722775 containerd[1535]: time="2025-10-31T00:51:15.722187883Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.519847457s"
Oct 31 00:51:15.722775 containerd[1535]: time="2025-10-31T00:51:15.722208076Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Oct 31 00:51:15.722937 containerd[1535]: time="2025-10-31T00:51:15.722895835Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Oct 31 00:51:16.968124 containerd[1535]: time="2025-10-31T00:51:16.967798176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:16.975783 containerd[1535]: time="2025-10-31T00:51:16.975753719Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Oct 31 00:51:16.981655 containerd[1535]: time="2025-10-31T00:51:16.981592254Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:16.991356 containerd[1535]: time="2025-10-31T00:51:16.990374320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:16.991356 containerd[1535]: time="2025-10-31T00:51:16.991096123Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.268177119s"
Oct 31 00:51:16.991356 containerd[1535]: time="2025-10-31T00:51:16.991117594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Oct 31 00:51:16.991682 containerd[1535]: time="2025-10-31T00:51:16.991667394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Oct 31 00:51:17.901069 containerd[1535]: time="2025-10-31T00:51:17.901042185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:17.901512 containerd[1535]: time="2025-10-31T00:51:17.901489057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Oct 31 00:51:17.901955 containerd[1535]: time="2025-10-31T00:51:17.901676174Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:17.903304 containerd[1535]: time="2025-10-31T00:51:17.903288843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:17.903947 containerd[1535]: time="2025-10-31T00:51:17.903931705Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 912.154931ms"
Oct 31 00:51:17.903982 containerd[1535]: time="2025-10-31T00:51:17.903949284Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Oct 31 00:51:17.904224 containerd[1535]: time="2025-10-31T00:51:17.904212092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 31 00:51:18.891577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367201497.mount: Deactivated successfully.
Oct 31 00:51:19.115651 containerd[1535]: time="2025-10-31T00:51:19.115598693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:19.116469 containerd[1535]: time="2025-10-31T00:51:19.116453982Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:19.116544 containerd[1535]: time="2025-10-31T00:51:19.116528331Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Oct 31 00:51:19.118715 containerd[1535]: time="2025-10-31T00:51:19.118698143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:19.119726 containerd[1535]: time="2025-10-31T00:51:19.119712352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.215484688s"
Oct 31 00:51:19.119777 containerd[1535]: time="2025-10-31T00:51:19.119768133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Oct 31 00:51:19.120461 containerd[1535]: time="2025-10-31T00:51:19.120339176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 31 00:51:19.673960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167353275.mount: Deactivated successfully.
Oct 31 00:51:20.477643 containerd[1535]: time="2025-10-31T00:51:20.477088377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:20.477643 containerd[1535]: time="2025-10-31T00:51:20.477588037Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Oct 31 00:51:20.478008 containerd[1535]: time="2025-10-31T00:51:20.477980285Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:20.480409 containerd[1535]: time="2025-10-31T00:51:20.480389367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:20.481393 containerd[1535]: time="2025-10-31T00:51:20.481372568Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.360907279s"
Oct 31 00:51:20.481471 containerd[1535]: time="2025-10-31T00:51:20.481458055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Oct 31 00:51:20.481902 containerd[1535]: time="2025-10-31T00:51:20.481889400Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 31 00:51:20.884040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943500787.mount: Deactivated successfully.
Oct 31 00:51:20.886051 containerd[1535]: time="2025-10-31T00:51:20.885616129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:20.886245 containerd[1535]: time="2025-10-31T00:51:20.886229223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Oct 31 00:51:20.886601 containerd[1535]: time="2025-10-31T00:51:20.886589777Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:20.887956 containerd[1535]: time="2025-10-31T00:51:20.887944570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:20.888662 containerd[1535]: time="2025-10-31T00:51:20.888453588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 406.488015ms"
Oct 31 00:51:20.888716 containerd[1535]: time="2025-10-31T00:51:20.888707225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Oct 31 00:51:20.889219 containerd[1535]: time="2025-10-31T00:51:20.889204334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 31 00:51:23.608893 containerd[1535]: time="2025-10-31T00:51:23.608868344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:23.612055 containerd[1535]: time="2025-10-31T00:51:23.610011953Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Oct 31 00:51:23.612055 containerd[1535]: time="2025-10-31T00:51:23.610348205Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:23.612055 containerd[1535]: time="2025-10-31T00:51:23.612018717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:51:23.613179 containerd[1535]: time="2025-10-31T00:51:23.613158591Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.723901885s"
Oct 31 00:51:23.613211 containerd[1535]: time="2025-10-31T00:51:23.613178506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Oct 31 00:51:25.543701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:25.551765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:51:25.568684 systemd[1]: Reloading requested from client PID 2200 ('systemctl') (unit session-9.scope)...
Oct 31 00:51:25.568773 systemd[1]: Reloading...
Oct 31 00:51:25.629665 zram_generator::config[2237]: No configuration found.
Oct 31 00:51:25.678032 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 31 00:51:25.693212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:51:25.737171 systemd[1]: Reloading finished in 168 ms.
Oct 31 00:51:25.777588 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 31 00:51:25.777654 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 31 00:51:25.777812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:25.781849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:51:26.156313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:26.166811 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 31 00:51:26.219801 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 31 00:51:26.219801 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:51:26.219801 kubelet[2305]: I1031 00:51:26.219733 2305 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 31 00:51:26.552476 kubelet[2305]: I1031 00:51:26.552408 2305 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 31 00:51:26.552476 kubelet[2305]: I1031 00:51:26.552429 2305 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 31 00:51:26.554909 kubelet[2305]: I1031 00:51:26.554894 2305 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 31 00:51:26.554945 kubelet[2305]: I1031 00:51:26.554909 2305 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 00:51:26.555073 kubelet[2305]: I1031 00:51:26.555059 2305 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 31 00:51:26.571114 kubelet[2305]: E1031 00:51:26.571070 2305 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 31 00:51:26.573064 kubelet[2305]: I1031 00:51:26.573050 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 31 00:51:26.583831 kubelet[2305]: E1031 00:51:26.583774 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 31 00:51:26.583831 kubelet[2305]: I1031 00:51:26.583807 2305 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Oct 31 00:51:26.587475 kubelet[2305]: I1031 00:51:26.587461 2305 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 31 00:51:26.591382 kubelet[2305]: I1031 00:51:26.591356 2305 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 00:51:26.592741 kubelet[2305]: I1031 00:51:26.591382 2305 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 31 00:51:26.592826 kubelet[2305]: I1031 00:51:26.592745 2305 topology_manager.go:138] "Creating topology manager with none policy"
Oct 31 00:51:26.592826 kubelet[2305]: I1031 00:51:26.592754 2305 container_manager_linux.go:306] "Creating device plugin manager"
Oct 31 00:51:26.592826 kubelet[2305]: I1031 00:51:26.592817 2305 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 31 00:51:26.593869 kubelet[2305]: I1031 00:51:26.593854 2305 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:51:26.595232 kubelet[2305]: I1031 00:51:26.595214 2305 kubelet.go:475] "Attempting to sync node with API server"
Oct 31 00:51:26.595232 kubelet[2305]: I1031 00:51:26.595232 2305 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 00:51:26.596026 kubelet[2305]: I1031 00:51:26.595246 2305 kubelet.go:387] "Adding apiserver pod source"
Oct 31 00:51:26.596026 kubelet[2305]: I1031 00:51:26.595253 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 00:51:26.598103 kubelet[2305]: E1031 00:51:26.597612 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 31 00:51:26.598103 kubelet[2305]: E1031 00:51:26.597803 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 31 00:51:26.598172 kubelet[2305]: I1031 00:51:26.598147 2305 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 31 00:51:26.600072 kubelet[2305]: I1031 00:51:26.599718 2305 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 31 00:51:26.600072 kubelet[2305]: I1031 00:51:26.599742 2305 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 31 00:51:26.601911 kubelet[2305]: W1031 00:51:26.601898 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 31 00:51:26.605296 kubelet[2305]: I1031 00:51:26.605221 2305 server.go:1262] "Started kubelet"
Oct 31 00:51:26.608025 kubelet[2305]: I1031 00:51:26.607958 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 00:51:26.611336 kubelet[2305]: E1031 00:51:26.608954 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736d2564d9add3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:51:26.605200851 +0000 UTC m=+0.436524823,LastTimestamp:2025-10-31 00:51:26.605200851 +0000 UTC m=+0.436524823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 31 00:51:26.611336 kubelet[2305]: I1031 00:51:26.610815 2305 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 00:51:26.614829 kubelet[2305]: I1031 00:51:26.614535
2305 server.go:310] "Adding debug handlers to kubelet server" Oct 31 00:51:26.615523 kubelet[2305]: I1031 00:51:26.615492 2305 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 00:51:26.615679 kubelet[2305]: E1031 00:51:26.615670 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:51:26.617559 kubelet[2305]: I1031 00:51:26.617353 2305 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:51:26.617559 kubelet[2305]: I1031 00:51:26.617378 2305 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 00:51:26.617559 kubelet[2305]: I1031 00:51:26.617463 2305 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:51:26.618227 kubelet[2305]: I1031 00:51:26.618211 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:51:26.618758 kubelet[2305]: I1031 00:51:26.618681 2305 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 00:51:26.618758 kubelet[2305]: I1031 00:51:26.618705 2305 reconciler.go:29] "Reconciler: start to sync state" Oct 31 00:51:26.619998 kubelet[2305]: E1031 00:51:26.619975 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" Oct 31 00:51:26.620099 kubelet[2305]: I1031 00:51:26.620087 2305 factory.go:223] Registration of the systemd container factory successfully Oct 31 00:51:26.620232 kubelet[2305]: I1031 00:51:26.620130 2305 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:51:26.620930 kubelet[2305]: E1031 00:51:26.620576 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 00:51:26.621821 kubelet[2305]: I1031 00:51:26.621795 2305 factory.go:223] Registration of the containerd container factory successfully Oct 31 00:51:26.627941 kubelet[2305]: I1031 00:51:26.627872 2305 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 00:51:26.628456 kubelet[2305]: I1031 00:51:26.628444 2305 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 31 00:51:26.628483 kubelet[2305]: I1031 00:51:26.628457 2305 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 00:51:26.628483 kubelet[2305]: I1031 00:51:26.628472 2305 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 00:51:26.628510 kubelet[2305]: E1031 00:51:26.628499 2305 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:51:26.632529 kubelet[2305]: E1031 00:51:26.632513 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 00:51:26.632597 kubelet[2305]: E1031 00:51:26.632585 2305 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:51:26.636820 kubelet[2305]: I1031 00:51:26.636809 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:51:26.636820 kubelet[2305]: I1031 00:51:26.636818 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:51:26.636870 kubelet[2305]: I1031 00:51:26.636826 2305 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:51:26.637454 kubelet[2305]: I1031 00:51:26.637444 2305 policy_none.go:49] "None policy: Start" Oct 31 00:51:26.637476 kubelet[2305]: I1031 00:51:26.637455 2305 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 00:51:26.637476 kubelet[2305]: I1031 00:51:26.637462 2305 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 00:51:26.637815 kubelet[2305]: I1031 00:51:26.637806 2305 policy_none.go:47] "Start" Oct 31 00:51:26.641489 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 00:51:26.652636 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 00:51:26.655833 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 31 00:51:26.665158 kubelet[2305]: E1031 00:51:26.665139 2305 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 00:51:26.665423 kubelet[2305]: I1031 00:51:26.665266 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:51:26.665423 kubelet[2305]: I1031 00:51:26.665283 2305 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:51:26.665537 kubelet[2305]: I1031 00:51:26.665523 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:51:26.667592 kubelet[2305]: E1031 00:51:26.667538 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:51:26.667956 kubelet[2305]: E1031 00:51:26.667941 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 00:51:26.738774 systemd[1]: Created slice kubepods-burstable-podccc88040b03835fa55c4ea441685ea91.slice - libcontainer container kubepods-burstable-podccc88040b03835fa55c4ea441685ea91.slice. Oct 31 00:51:26.754799 kubelet[2305]: E1031 00:51:26.754777 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:51:26.757509 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 31 00:51:26.764385 kubelet[2305]: E1031 00:51:26.764368 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:51:26.766402 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 31 00:51:26.766972 kubelet[2305]: I1031 00:51:26.766584 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:51:26.766972 kubelet[2305]: E1031 00:51:26.766821 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 00:51:26.768187 kubelet[2305]: E1031 00:51:26.768175 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:51:26.819709 kubelet[2305]: I1031 00:51:26.819650 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccc88040b03835fa55c4ea441685ea91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ccc88040b03835fa55c4ea441685ea91\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:51:26.820953 kubelet[2305]: E1031 00:51:26.820923 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" Oct 31 00:51:26.920741 kubelet[2305]: I1031 00:51:26.920715 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:51:26.920842 kubelet[2305]: I1031 00:51:26.920753 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:51:26.920842 kubelet[2305]: I1031 00:51:26.920767 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:51:26.920842 kubelet[2305]: I1031 00:51:26.920779 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:51:26.920842 kubelet[2305]: I1031 00:51:26.920791 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:51:26.920842 kubelet[2305]: I1031 00:51:26.920821 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccc88040b03835fa55c4ea441685ea91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ccc88040b03835fa55c4ea441685ea91\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:51:26.920957 kubelet[2305]: I1031 00:51:26.920834 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccc88040b03835fa55c4ea441685ea91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ccc88040b03835fa55c4ea441685ea91\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:51:26.920957 kubelet[2305]: I1031 00:51:26.920845 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:51:26.967986 kubelet[2305]: I1031 00:51:26.967952 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:51:26.968217 kubelet[2305]: E1031 00:51:26.968187 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 00:51:27.056988 containerd[1535]: time="2025-10-31T00:51:27.056707430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ccc88040b03835fa55c4ea441685ea91,Namespace:kube-system,Attempt:0,}" Oct 31 00:51:27.074657 containerd[1535]: time="2025-10-31T00:51:27.071374053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 31 00:51:27.083452 containerd[1535]: time="2025-10-31T00:51:27.083436659Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 31 00:51:27.221903 kubelet[2305]: E1031 00:51:27.221866 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" Oct 31 00:51:27.295877 update_engine[1518]: I20251031 00:51:27.295828 1518 update_attempter.cc:509] Updating boot flags... Oct 31 00:51:27.348655 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2348) Oct 31 00:51:27.370263 kubelet[2305]: I1031 00:51:27.369978 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:51:27.370263 kubelet[2305]: E1031 00:51:27.370231 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Oct 31 00:51:27.379659 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2348) Oct 31 00:51:27.531849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78713894.mount: Deactivated successfully. 
Oct 31 00:51:27.534477 containerd[1535]: time="2025-10-31T00:51:27.534451145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:51:27.535200 containerd[1535]: time="2025-10-31T00:51:27.535159799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 31 00:51:27.535306 containerd[1535]: time="2025-10-31T00:51:27.535279792Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:51:27.535932 containerd[1535]: time="2025-10-31T00:51:27.535771809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:51:27.536128 containerd[1535]: time="2025-10-31T00:51:27.536102577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:51:27.536389 containerd[1535]: time="2025-10-31T00:51:27.536368997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:51:27.536833 containerd[1535]: time="2025-10-31T00:51:27.536812358Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:51:27.539996 containerd[1535]: time="2025-10-31T00:51:27.539965909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:51:27.542196 
containerd[1535]: time="2025-10-31T00:51:27.542018376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.25345ms" Oct 31 00:51:27.550456 containerd[1535]: time="2025-10-31T00:51:27.550396442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.928862ms" Oct 31 00:51:27.551315 containerd[1535]: time="2025-10-31T00:51:27.551188247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.766586ms" Oct 31 00:51:27.653954 containerd[1535]: time="2025-10-31T00:51:27.649925717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:27.653954 containerd[1535]: time="2025-10-31T00:51:27.650650616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:27.653954 containerd[1535]: time="2025-10-31T00:51:27.650672528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:27.653954 containerd[1535]: time="2025-10-31T00:51:27.650738432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:27.660545 containerd[1535]: time="2025-10-31T00:51:27.660348065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:27.660545 containerd[1535]: time="2025-10-31T00:51:27.660380588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:27.660545 containerd[1535]: time="2025-10-31T00:51:27.660390300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:27.660545 containerd[1535]: time="2025-10-31T00:51:27.660430497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:27.663942 containerd[1535]: time="2025-10-31T00:51:27.662612948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:27.664044 containerd[1535]: time="2025-10-31T00:51:27.663995720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:27.664044 containerd[1535]: time="2025-10-31T00:51:27.664015425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:27.664723 containerd[1535]: time="2025-10-31T00:51:27.664156035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:27.671709 systemd[1]: Started cri-containerd-210ee0f4a4f2cd50ee023ebc638d7187f023e1bf57f00f5a9598577de4a406a6.scope - libcontainer container 210ee0f4a4f2cd50ee023ebc638d7187f023e1bf57f00f5a9598577de4a406a6. 
Oct 31 00:51:27.678329 systemd[1]: Started cri-containerd-04ee5427a18760dd6fb5b8e9f4155fdd9075ef4c9ab82737dc2043a402458560.scope - libcontainer container 04ee5427a18760dd6fb5b8e9f4155fdd9075ef4c9ab82737dc2043a402458560. Oct 31 00:51:27.682749 systemd[1]: Started cri-containerd-3cccb06abdd0ad5e9bd9bbf2936abd7449cf18996420121003c6f81def440f4c.scope - libcontainer container 3cccb06abdd0ad5e9bd9bbf2936abd7449cf18996420121003c6f81def440f4c. Oct 31 00:51:27.706939 kubelet[2305]: E1031 00:51:27.706903 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 00:51:27.725873 containerd[1535]: time="2025-10-31T00:51:27.725851362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ccc88040b03835fa55c4ea441685ea91,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cccb06abdd0ad5e9bd9bbf2936abd7449cf18996420121003c6f81def440f4c\"" Oct 31 00:51:27.727600 containerd[1535]: time="2025-10-31T00:51:27.727493573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"210ee0f4a4f2cd50ee023ebc638d7187f023e1bf57f00f5a9598577de4a406a6\"" Oct 31 00:51:27.730608 containerd[1535]: time="2025-10-31T00:51:27.730575753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"04ee5427a18760dd6fb5b8e9f4155fdd9075ef4c9ab82737dc2043a402458560\"" Oct 31 00:51:27.739163 containerd[1535]: time="2025-10-31T00:51:27.738989980Z" level=info msg="CreateContainer within sandbox 
\"3cccb06abdd0ad5e9bd9bbf2936abd7449cf18996420121003c6f81def440f4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:51:27.739286 containerd[1535]: time="2025-10-31T00:51:27.739274043Z" level=info msg="CreateContainer within sandbox \"04ee5427a18760dd6fb5b8e9f4155fdd9075ef4c9ab82737dc2043a402458560\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:51:27.741256 containerd[1535]: time="2025-10-31T00:51:27.740946392Z" level=info msg="CreateContainer within sandbox \"210ee0f4a4f2cd50ee023ebc638d7187f023e1bf57f00f5a9598577de4a406a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:51:27.755000 containerd[1535]: time="2025-10-31T00:51:27.754977994Z" level=info msg="CreateContainer within sandbox \"210ee0f4a4f2cd50ee023ebc638d7187f023e1bf57f00f5a9598577de4a406a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3e3485cbf39e01fc6a051dd85b05c725eb979270a6f33dd3689d894cf6a2dae\"" Oct 31 00:51:27.755154 containerd[1535]: time="2025-10-31T00:51:27.755133959Z" level=info msg="CreateContainer within sandbox \"04ee5427a18760dd6fb5b8e9f4155fdd9075ef4c9ab82737dc2043a402458560\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc88599bb39af54121802c6a371604795a71cbce695efdb1230dbb2fd8a6c343\"" Oct 31 00:51:27.755306 containerd[1535]: time="2025-10-31T00:51:27.755295834Z" level=info msg="StartContainer for \"c3e3485cbf39e01fc6a051dd85b05c725eb979270a6f33dd3689d894cf6a2dae\"" Oct 31 00:51:27.756726 containerd[1535]: time="2025-10-31T00:51:27.756712666Z" level=info msg="CreateContainer within sandbox \"3cccb06abdd0ad5e9bd9bbf2936abd7449cf18996420121003c6f81def440f4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"714597bc12539824eb86db2b16f0a9a17de49d6f0d9e7dbfab6112e63a4e5c98\"" Oct 31 00:51:27.756997 containerd[1535]: time="2025-10-31T00:51:27.756986275Z" level=info msg="StartContainer for 
\"fc88599bb39af54121802c6a371604795a71cbce695efdb1230dbb2fd8a6c343\"" Oct 31 00:51:27.762519 containerd[1535]: time="2025-10-31T00:51:27.762501198Z" level=info msg="StartContainer for \"714597bc12539824eb86db2b16f0a9a17de49d6f0d9e7dbfab6112e63a4e5c98\"" Oct 31 00:51:27.774708 systemd[1]: Started cri-containerd-c3e3485cbf39e01fc6a051dd85b05c725eb979270a6f33dd3689d894cf6a2dae.scope - libcontainer container c3e3485cbf39e01fc6a051dd85b05c725eb979270a6f33dd3689d894cf6a2dae. Oct 31 00:51:27.782777 systemd[1]: Started cri-containerd-fc88599bb39af54121802c6a371604795a71cbce695efdb1230dbb2fd8a6c343.scope - libcontainer container fc88599bb39af54121802c6a371604795a71cbce695efdb1230dbb2fd8a6c343. Oct 31 00:51:27.792759 systemd[1]: Started cri-containerd-714597bc12539824eb86db2b16f0a9a17de49d6f0d9e7dbfab6112e63a4e5c98.scope - libcontainer container 714597bc12539824eb86db2b16f0a9a17de49d6f0d9e7dbfab6112e63a4e5c98. Oct 31 00:51:27.833890 containerd[1535]: time="2025-10-31T00:51:27.833869189Z" level=info msg="StartContainer for \"714597bc12539824eb86db2b16f0a9a17de49d6f0d9e7dbfab6112e63a4e5c98\" returns successfully" Oct 31 00:51:27.838293 containerd[1535]: time="2025-10-31T00:51:27.838274229Z" level=info msg="StartContainer for \"c3e3485cbf39e01fc6a051dd85b05c725eb979270a6f33dd3689d894cf6a2dae\" returns successfully" Oct 31 00:51:27.842823 containerd[1535]: time="2025-10-31T00:51:27.842746837Z" level=info msg="StartContainer for \"fc88599bb39af54121802c6a371604795a71cbce695efdb1230dbb2fd8a6c343\" returns successfully" Oct 31 00:51:27.911408 kubelet[2305]: E1031 00:51:27.911335 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 00:51:28.022247 kubelet[2305]: E1031 00:51:28.022218 2305 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s"
Oct 31 00:51:28.139439 kubelet[2305]: E1031 00:51:28.139415 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 31 00:51:28.160597 kubelet[2305]: E1031 00:51:28.160579 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 31 00:51:28.171481 kubelet[2305]: I1031 00:51:28.171331 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:51:28.171481 kubelet[2305]: E1031 00:51:28.171442 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
Oct 31 00:51:28.637062 kubelet[2305]: E1031 00:51:28.636949 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:51:28.639825 kubelet[2305]: E1031 00:51:28.639276 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:51:28.640057 kubelet[2305]: E1031 00:51:28.640050 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:51:29.641869 kubelet[2305]: E1031 00:51:29.641802 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:51:29.642148 kubelet[2305]: E1031 00:51:29.641945 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 31 00:51:29.655282 kubelet[2305]: E1031 00:51:29.655260 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 31 00:51:29.773028 kubelet[2305]: I1031 00:51:29.772998 2305 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:51:29.782925 kubelet[2305]: I1031 00:51:29.782901 2305 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 31 00:51:29.782925 kubelet[2305]: E1031 00:51:29.782922 2305 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 31 00:51:29.790431 kubelet[2305]: E1031 00:51:29.790416 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:29.891210 kubelet[2305]: E1031 00:51:29.891189 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:29.991715 kubelet[2305]: E1031 00:51:29.991678 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:30.092356 kubelet[2305]: E1031 00:51:30.092335 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:30.193279 kubelet[2305]: E1031 00:51:30.193241 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:30.293942 kubelet[2305]: E1031 00:51:30.293884 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:30.394729 kubelet[2305]: E1031 00:51:30.394694 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:30.495895 kubelet[2305]: E1031 00:51:30.495867 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:30.598464 kubelet[2305]: I1031 00:51:30.597976 2305 apiserver.go:52] "Watching apiserver"
Oct 31 00:51:30.616938 kubelet[2305]: I1031 00:51:30.616911 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:30.618922 kubelet[2305]: I1031 00:51:30.618904 2305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 31 00:51:30.622868 kubelet[2305]: I1031 00:51:30.622848 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:30.626431 kubelet[2305]: I1031 00:51:30.626413 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:30.643317 kubelet[2305]: I1031 00:51:30.643209 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:30.645268 kubelet[2305]: E1031 00:51:30.645252 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:31.701809 systemd[1]: Reloading requested from client PID 2605 ('systemctl') (unit session-9.scope)...
Oct 31 00:51:31.701820 systemd[1]: Reloading...
Oct 31 00:51:31.757639 zram_generator::config[2642]: No configuration found.
Oct 31 00:51:31.816134 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Oct 31 00:51:31.833854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:51:31.885437 systemd[1]: Reloading finished in 183 ms.
Oct 31 00:51:31.916090 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:51:31.925204 systemd[1]: kubelet.service: Deactivated successfully.
Oct 31 00:51:31.925366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:31.929938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:51:32.221144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:51:32.229843 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 31 00:51:32.279924 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 31 00:51:32.280158 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:51:32.280263 kubelet[2709]: I1031 00:51:32.280243 2709 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 31 00:51:32.292858 kubelet[2709]: I1031 00:51:32.292839 2709 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 31 00:51:32.292858 kubelet[2709]: I1031 00:51:32.292855 2709 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 31 00:51:32.318541 kubelet[2709]: I1031 00:51:32.318523 2709 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 31 00:51:32.318541 kubelet[2709]: I1031 00:51:32.318541 2709 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 00:51:32.318763 kubelet[2709]: I1031 00:51:32.318745 2709 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 31 00:51:32.319661 kubelet[2709]: I1031 00:51:32.319645 2709 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 31 00:51:32.329681 kubelet[2709]: I1031 00:51:32.329465 2709 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 31 00:51:32.418913 kubelet[2709]: E1031 00:51:32.418853 2709 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 31 00:51:32.418913 kubelet[2709]: I1031 00:51:32.418906 2709 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Oct 31 00:51:32.421814 kubelet[2709]: I1031 00:51:32.421743 2709 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 31 00:51:32.421891 kubelet[2709]: I1031 00:51:32.421877 2709 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 00:51:32.422013 kubelet[2709]: I1031 00:51:32.421893 2709 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 31 00:51:32.422013 kubelet[2709]: I1031 00:51:32.422013 2709 topology_manager.go:138] "Creating topology manager with none policy"
Oct 31 00:51:32.422113 kubelet[2709]: I1031 00:51:32.422022 2709 container_manager_linux.go:306] "Creating device plugin manager"
Oct 31 00:51:32.422113 kubelet[2709]: I1031 00:51:32.422040 2709 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 31 00:51:32.422551 kubelet[2709]: I1031 00:51:32.422537 2709 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:51:32.422752 kubelet[2709]: I1031 00:51:32.422699 2709 kubelet.go:475] "Attempting to sync node with API server"
Oct 31 00:51:32.422752 kubelet[2709]: I1031 00:51:32.422710 2709 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 00:51:32.434982 kubelet[2709]: I1031 00:51:32.434962 2709 kubelet.go:387] "Adding apiserver pod source"
Oct 31 00:51:32.435029 kubelet[2709]: I1031 00:51:32.434985 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 00:51:32.439857 kubelet[2709]: I1031 00:51:32.439752 2709 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 31 00:51:32.440259 kubelet[2709]: I1031 00:51:32.440237 2709 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 31 00:51:32.440411 kubelet[2709]: I1031 00:51:32.440336 2709 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 31 00:51:32.442282 kubelet[2709]: I1031 00:51:32.442035 2709 server.go:1262] "Started kubelet"
Oct 31 00:51:32.446443 kubelet[2709]: I1031 00:51:32.446416 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 00:51:32.453870 kubelet[2709]: I1031 00:51:32.452838 2709 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 00:51:32.453870 kubelet[2709]: I1031 00:51:32.453582 2709 server.go:310] "Adding debug handlers to kubelet server"
Oct 31 00:51:32.459668 kubelet[2709]: I1031 00:51:32.457445 2709 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 00:51:32.459668 kubelet[2709]: I1031 00:51:32.457477 2709 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 31 00:51:32.459668 kubelet[2709]: I1031 00:51:32.457572 2709 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 00:51:32.459668 kubelet[2709]: I1031 00:51:32.457772 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 00:51:32.461791 kubelet[2709]: I1031 00:51:32.461780 2709 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 31 00:51:32.462336 kubelet[2709]: E1031 00:51:32.462316 2709 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:51:32.465001 kubelet[2709]: I1031 00:51:32.464992 2709 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 31 00:51:32.465161 kubelet[2709]: I1031 00:51:32.465112 2709 factory.go:223] Registration of the systemd container factory successfully
Oct 31 00:51:32.465198 kubelet[2709]: I1031 00:51:32.465172 2709 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 00:51:32.468184 kubelet[2709]: I1031 00:51:32.468038 2709 reconciler.go:29] "Reconciler: start to sync state"
Oct 31 00:51:32.469396 kubelet[2709]: I1031 00:51:32.469379 2709 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 31 00:51:32.470341 kubelet[2709]: I1031 00:51:32.470328 2709 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 31 00:51:32.470341 kubelet[2709]: I1031 00:51:32.470338 2709 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 31 00:51:32.470392 kubelet[2709]: I1031 00:51:32.470350 2709 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 31 00:51:32.470392 kubelet[2709]: E1031 00:51:32.470371 2709 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 31 00:51:32.471699 kubelet[2709]: I1031 00:51:32.470474 2709 factory.go:223] Registration of the containerd container factory successfully
Oct 31 00:51:32.505371 kubelet[2709]: I1031 00:51:32.505358 2709 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 31 00:51:32.505474 kubelet[2709]: I1031 00:51:32.505466 2709 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 31 00:51:32.505511 kubelet[2709]: I1031 00:51:32.505507 2709 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:51:32.505631 kubelet[2709]: I1031 00:51:32.505606 2709 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 31 00:51:32.505676 kubelet[2709]: I1031 00:51:32.505664 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 31 00:51:32.505715 kubelet[2709]: I1031 00:51:32.505711 2709 policy_none.go:49] "None policy: Start"
Oct 31 00:51:32.505746 kubelet[2709]: I1031 00:51:32.505742 2709 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 31 00:51:32.505773 kubelet[2709]: I1031 00:51:32.505769 2709 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 31 00:51:32.505883 kubelet[2709]: I1031 00:51:32.505877 2709 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 31 00:51:32.505926 kubelet[2709]: I1031 00:51:32.505921 2709 policy_none.go:47] "Start"
Oct 31 00:51:32.508475 kubelet[2709]: E1031 00:51:32.508466 2709 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 31 00:51:32.509440 kubelet[2709]: I1031 00:51:32.508965 2709 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 31 00:51:32.509440 kubelet[2709]: I1031 00:51:32.508975 2709 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 31 00:51:32.509440 kubelet[2709]: I1031 00:51:32.509186 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 31 00:51:32.511200 kubelet[2709]: E1031 00:51:32.510056 2709 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 31 00:51:32.570843 kubelet[2709]: I1031 00:51:32.570821 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:32.570981 kubelet[2709]: I1031 00:51:32.570972 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:32.571084 kubelet[2709]: I1031 00:51:32.571070 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:32.575131 kubelet[2709]: E1031 00:51:32.575070 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:32.575599 kubelet[2709]: E1031 00:51:32.575583 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:32.575709 kubelet[2709]: E1031 00:51:32.575697 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:32.610951 kubelet[2709]: I1031 00:51:32.610935 2709 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 00:51:32.616063 kubelet[2709]: I1031 00:51:32.616044 2709 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 31 00:51:32.616234 kubelet[2709]: I1031 00:51:32.616093 2709 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 31 00:51:32.669295 kubelet[2709]: I1031 00:51:32.669271 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccc88040b03835fa55c4ea441685ea91-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ccc88040b03835fa55c4ea441685ea91\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:32.669358 kubelet[2709]: I1031 00:51:32.669299 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:32.669358 kubelet[2709]: I1031 00:51:32.669314 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:32.669358 kubelet[2709]: I1031 00:51:32.669327 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:32.669358 kubelet[2709]: I1031 00:51:32.669342 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:32.669358 kubelet[2709]: I1031 00:51:32.669353 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:32.669506 kubelet[2709]: I1031 00:51:32.669363 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccc88040b03835fa55c4ea441685ea91-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ccc88040b03835fa55c4ea441685ea91\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:32.669506 kubelet[2709]: I1031 00:51:32.669374 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccc88040b03835fa55c4ea441685ea91-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ccc88040b03835fa55c4ea441685ea91\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:32.669506 kubelet[2709]: I1031 00:51:32.669386 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 00:51:33.448982 kubelet[2709]: I1031 00:51:33.448947 2709 apiserver.go:52] "Watching apiserver"
Oct 31 00:51:33.465185 kubelet[2709]: I1031 00:51:33.465154 2709 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 31 00:51:33.495655 kubelet[2709]: I1031 00:51:33.494782 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:33.495655 kubelet[2709]: I1031 00:51:33.495012 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:33.498416 kubelet[2709]: E1031 00:51:33.498303 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 31 00:51:33.501723 kubelet[2709]: E1031 00:51:33.501709 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 31 00:51:33.508743 kubelet[2709]: I1031 00:51:33.508697 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.508689198 podStartE2EDuration="3.508689198s" podCreationTimestamp="2025-10-31 00:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:51:33.508680514 +0000 UTC m=+1.268130299" watchObservedRunningTime="2025-10-31 00:51:33.508689198 +0000 UTC m=+1.268138980"
Oct 31 00:51:33.512338 kubelet[2709]: I1031 00:51:33.512314 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5123063979999998 podStartE2EDuration="3.512306398s" podCreationTimestamp="2025-10-31 00:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:51:33.512174348 +0000 UTC m=+1.271624131" watchObservedRunningTime="2025-10-31 00:51:33.512306398 +0000 UTC m=+1.271756171"
Oct 31 00:51:33.515709 kubelet[2709]: I1031 00:51:33.515603 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.5155946670000002 podStartE2EDuration="3.515594667s" podCreationTimestamp="2025-10-31 00:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:51:33.515566651 +0000 UTC m=+1.275016436" watchObservedRunningTime="2025-10-31 00:51:33.515594667 +0000 UTC m=+1.275044453"
Oct 31 00:51:39.136758 kubelet[2709]: I1031 00:51:39.136728 2709 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 31 00:51:39.137133 containerd[1535]: time="2025-10-31T00:51:39.137096583Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 31 00:51:39.137472 kubelet[2709]: I1031 00:51:39.137233 2709 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 31 00:51:39.794387 systemd[1]: Created slice kubepods-besteffort-podcdf6ad0a_3d9d_4156_a76d_6af37570cb8b.slice - libcontainer container kubepods-besteffort-podcdf6ad0a_3d9d_4156_a76d_6af37570cb8b.slice.
Oct 31 00:51:39.818874 kubelet[2709]: I1031 00:51:39.818747 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cdf6ad0a-3d9d-4156-a76d-6af37570cb8b-kube-proxy\") pod \"kube-proxy-vtxg5\" (UID: \"cdf6ad0a-3d9d-4156-a76d-6af37570cb8b\") " pod="kube-system/kube-proxy-vtxg5"
Oct 31 00:51:39.818874 kubelet[2709]: I1031 00:51:39.818783 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf6ad0a-3d9d-4156-a76d-6af37570cb8b-lib-modules\") pod \"kube-proxy-vtxg5\" (UID: \"cdf6ad0a-3d9d-4156-a76d-6af37570cb8b\") " pod="kube-system/kube-proxy-vtxg5"
Oct 31 00:51:39.818874 kubelet[2709]: I1031 00:51:39.818802 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf6ad0a-3d9d-4156-a76d-6af37570cb8b-xtables-lock\") pod \"kube-proxy-vtxg5\" (UID: \"cdf6ad0a-3d9d-4156-a76d-6af37570cb8b\") " pod="kube-system/kube-proxy-vtxg5"
Oct 31 00:51:39.818874 kubelet[2709]: I1031 00:51:39.818815 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqgv\" (UniqueName: \"kubernetes.io/projected/cdf6ad0a-3d9d-4156-a76d-6af37570cb8b-kube-api-access-9zqgv\") pod \"kube-proxy-vtxg5\" (UID: \"cdf6ad0a-3d9d-4156-a76d-6af37570cb8b\") " pod="kube-system/kube-proxy-vtxg5"
Oct 31 00:51:39.930125 kubelet[2709]: E1031 00:51:39.929690 2709 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 31 00:51:39.930125 kubelet[2709]: E1031 00:51:39.929715 2709 projected.go:196] Error preparing data for projected volume kube-api-access-9zqgv for pod kube-system/kube-proxy-vtxg5: configmap "kube-root-ca.crt" not found
Oct 31 00:51:39.930125 kubelet[2709]: E1031 00:51:39.929763 2709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdf6ad0a-3d9d-4156-a76d-6af37570cb8b-kube-api-access-9zqgv podName:cdf6ad0a-3d9d-4156-a76d-6af37570cb8b nodeName:}" failed. No retries permitted until 2025-10-31 00:51:40.429746474 +0000 UTC m=+8.189196258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9zqgv" (UniqueName: "kubernetes.io/projected/cdf6ad0a-3d9d-4156-a76d-6af37570cb8b-kube-api-access-9zqgv") pod "kube-proxy-vtxg5" (UID: "cdf6ad0a-3d9d-4156-a76d-6af37570cb8b") : configmap "kube-root-ca.crt" not found
Oct 31 00:51:40.293985 systemd[1]: Created slice kubepods-besteffort-pod6b7842d9_ffbb_4c51_8f74_f1e28429c476.slice - libcontainer container kubepods-besteffort-pod6b7842d9_ffbb_4c51_8f74_f1e28429c476.slice.
Oct 31 00:51:40.297030 kubelet[2709]: E1031 00:51:40.297011 2709 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-65cdcdfd6d-v6bzb\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" podUID="6b7842d9-ffbb-4c51-8f74-f1e28429c476" pod="tigera-operator/tigera-operator-65cdcdfd6d-v6bzb"
Oct 31 00:51:40.297451 kubelet[2709]: E1031 00:51:40.297166 2709 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap"
Oct 31 00:51:40.297576 kubelet[2709]: E1031 00:51:40.297565 2709 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Oct 31 00:51:40.324131 kubelet[2709]: I1031 00:51:40.324102 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b7842d9-ffbb-4c51-8f74-f1e28429c476-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-v6bzb\" (UID: \"6b7842d9-ffbb-4c51-8f74-f1e28429c476\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v6bzb"
Oct 31 00:51:40.324348 kubelet[2709]: I1031 00:51:40.324248 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mqzj\" (UniqueName: \"kubernetes.io/projected/6b7842d9-ffbb-4c51-8f74-f1e28429c476-kube-api-access-5mqzj\") pod \"tigera-operator-65cdcdfd6d-v6bzb\" (UID: \"6b7842d9-ffbb-4c51-8f74-f1e28429c476\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v6bzb"
Oct 31 00:51:40.705504 containerd[1535]: time="2025-10-31T00:51:40.705429774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtxg5,Uid:cdf6ad0a-3d9d-4156-a76d-6af37570cb8b,Namespace:kube-system,Attempt:0,}"
Oct 31 00:51:40.720012 containerd[1535]: time="2025-10-31T00:51:40.719777661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:51:40.720012 containerd[1535]: time="2025-10-31T00:51:40.719830437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:51:40.720012 containerd[1535]: time="2025-10-31T00:51:40.719840700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:51:40.720012 containerd[1535]: time="2025-10-31T00:51:40.719889256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:51:40.738841 systemd[1]: Started cri-containerd-ebfe2bed15f8f050cb7ae5ac7653b4803dad7e83ab81e3020390f9ffcb74b966.scope - libcontainer container ebfe2bed15f8f050cb7ae5ac7653b4803dad7e83ab81e3020390f9ffcb74b966.
Oct 31 00:51:40.754183 containerd[1535]: time="2025-10-31T00:51:40.753955337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vtxg5,Uid:cdf6ad0a-3d9d-4156-a76d-6af37570cb8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebfe2bed15f8f050cb7ae5ac7653b4803dad7e83ab81e3020390f9ffcb74b966\""
Oct 31 00:51:40.758751 containerd[1535]: time="2025-10-31T00:51:40.758674300Z" level=info msg="CreateContainer within sandbox \"ebfe2bed15f8f050cb7ae5ac7653b4803dad7e83ab81e3020390f9ffcb74b966\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 31 00:51:40.765494 containerd[1535]: time="2025-10-31T00:51:40.765442675Z" level=info msg="CreateContainer within sandbox \"ebfe2bed15f8f050cb7ae5ac7653b4803dad7e83ab81e3020390f9ffcb74b966\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ffe58cb46df4e840236adfc4cbfc7134324631106704119dc5e102db66540500\""
Oct 31 00:51:40.765757 containerd[1535]: time="2025-10-31T00:51:40.765708345Z" level=info msg="StartContainer for \"ffe58cb46df4e840236adfc4cbfc7134324631106704119dc5e102db66540500\""
Oct 31 00:51:40.784839 systemd[1]: Started cri-containerd-ffe58cb46df4e840236adfc4cbfc7134324631106704119dc5e102db66540500.scope - libcontainer container ffe58cb46df4e840236adfc4cbfc7134324631106704119dc5e102db66540500.
Oct 31 00:51:40.801846 containerd[1535]: time="2025-10-31T00:51:40.801796985Z" level=info msg="StartContainer for \"ffe58cb46df4e840236adfc4cbfc7134324631106704119dc5e102db66540500\" returns successfully"
Oct 31 00:51:41.429557 kubelet[2709]: E1031 00:51:41.429420 2709 projected.go:291] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Oct 31 00:51:41.429557 kubelet[2709]: E1031 00:51:41.429437 2709 projected.go:196] Error preparing data for projected volume kube-api-access-5mqzj for pod tigera-operator/tigera-operator-65cdcdfd6d-v6bzb: failed to sync configmap cache: timed out waiting for the condition
Oct 31 00:51:41.429557 kubelet[2709]: E1031 00:51:41.429474 2709 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b7842d9-ffbb-4c51-8f74-f1e28429c476-kube-api-access-5mqzj podName:6b7842d9-ffbb-4c51-8f74-f1e28429c476 nodeName:}" failed. No retries permitted until 2025-10-31 00:51:41.929459221 +0000 UTC m=+9.688908998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5mqzj" (UniqueName: "kubernetes.io/projected/6b7842d9-ffbb-4c51-8f74-f1e28429c476-kube-api-access-5mqzj") pod "tigera-operator-65cdcdfd6d-v6bzb" (UID: "6b7842d9-ffbb-4c51-8f74-f1e28429c476") : failed to sync configmap cache: timed out waiting for the condition
Oct 31 00:51:41.519014 kubelet[2709]: I1031 00:51:41.518679 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vtxg5" podStartSLOduration=2.5186663400000002 podStartE2EDuration="2.51866634s" podCreationTimestamp="2025-10-31 00:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:51:41.51082501 +0000 UTC m=+9.270274802" watchObservedRunningTime="2025-10-31 00:51:41.51866634 +0000 UTC m=+9.278116127"
Oct 31 00:51:41.531071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037056678.mount: Deactivated successfully.
Oct 31 00:51:42.097651 containerd[1535]: time="2025-10-31T00:51:42.097608021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v6bzb,Uid:6b7842d9-ffbb-4c51-8f74-f1e28429c476,Namespace:tigera-operator,Attempt:0,}"
Oct 31 00:51:42.112389 containerd[1535]: time="2025-10-31T00:51:42.112232670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:51:42.112389 containerd[1535]: time="2025-10-31T00:51:42.112275312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:51:42.112389 containerd[1535]: time="2025-10-31T00:51:42.112285419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:51:42.112389 containerd[1535]: time="2025-10-31T00:51:42.112329787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:51:42.126835 systemd[1]: Started cri-containerd-3eb69f924a73509a2873299198b06870bca9ef44fa5f55bfda38b3a41e02f2c9.scope - libcontainer container 3eb69f924a73509a2873299198b06870bca9ef44fa5f55bfda38b3a41e02f2c9.
Oct 31 00:51:42.151725 containerd[1535]: time="2025-10-31T00:51:42.151697458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v6bzb,Uid:6b7842d9-ffbb-4c51-8f74-f1e28429c476,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3eb69f924a73509a2873299198b06870bca9ef44fa5f55bfda38b3a41e02f2c9\""
Oct 31 00:51:42.153105 containerd[1535]: time="2025-10-31T00:51:42.153087762Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Oct 31 00:51:43.498818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594765034.mount: Deactivated successfully.
Oct 31 00:51:43.893866 containerd[1535]: time="2025-10-31T00:51:43.893800212Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:43.894251 containerd[1535]: time="2025-10-31T00:51:43.894226318Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 00:51:43.894645 containerd[1535]: time="2025-10-31T00:51:43.894494970Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:43.895774 containerd[1535]: time="2025-10-31T00:51:43.895759475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:43.896245 containerd[1535]: time="2025-10-31T00:51:43.896229496Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.743121166s" Oct 31 00:51:43.896271 containerd[1535]: time="2025-10-31T00:51:43.896246376Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 00:51:43.898771 containerd[1535]: time="2025-10-31T00:51:43.898758554Z" level=info msg="CreateContainer within sandbox \"3eb69f924a73509a2873299198b06870bca9ef44fa5f55bfda38b3a41e02f2c9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 00:51:43.905330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043047367.mount: Deactivated successfully. 
Oct 31 00:51:43.921888 containerd[1535]: time="2025-10-31T00:51:43.921861557Z" level=info msg="CreateContainer within sandbox \"3eb69f924a73509a2873299198b06870bca9ef44fa5f55bfda38b3a41e02f2c9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8a229ed13834a7f52ebf5357230372e4899b398eb921bb7cc5292f8b48197b86\"" Oct 31 00:51:43.923139 containerd[1535]: time="2025-10-31T00:51:43.923121516Z" level=info msg="StartContainer for \"8a229ed13834a7f52ebf5357230372e4899b398eb921bb7cc5292f8b48197b86\"" Oct 31 00:51:43.945826 systemd[1]: Started cri-containerd-8a229ed13834a7f52ebf5357230372e4899b398eb921bb7cc5292f8b48197b86.scope - libcontainer container 8a229ed13834a7f52ebf5357230372e4899b398eb921bb7cc5292f8b48197b86. Oct 31 00:51:43.959842 containerd[1535]: time="2025-10-31T00:51:43.959818740Z" level=info msg="StartContainer for \"8a229ed13834a7f52ebf5357230372e4899b398eb921bb7cc5292f8b48197b86\" returns successfully" Oct 31 00:51:44.519315 kubelet[2709]: I1031 00:51:44.519276 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-v6bzb" podStartSLOduration=2.7748994590000002 podStartE2EDuration="4.519265371s" podCreationTimestamp="2025-10-31 00:51:40 +0000 UTC" firstStartedPulling="2025-10-31 00:51:42.152515269 +0000 UTC m=+9.911965042" lastFinishedPulling="2025-10-31 00:51:43.896881178 +0000 UTC m=+11.656330954" observedRunningTime="2025-10-31 00:51:44.518679375 +0000 UTC m=+12.278129160" watchObservedRunningTime="2025-10-31 00:51:44.519265371 +0000 UTC m=+12.278715156" Oct 31 00:51:48.903375 sudo[1831]: pam_unix(sudo:session): session closed for user root Oct 31 00:51:48.906860 sshd[1828]: pam_unix(sshd:session): session closed for user core Oct 31 00:51:48.909866 systemd[1]: sshd@6-139.178.70.106:22-139.178.68.195:55320.service: Deactivated successfully. Oct 31 00:51:48.912609 systemd[1]: session-9.scope: Deactivated successfully. 
Oct 31 00:51:48.912806 systemd[1]: session-9.scope: Consumed 3.072s CPU time, 144.7M memory peak, 0B memory swap peak. Oct 31 00:51:48.914782 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Oct 31 00:51:48.915764 systemd-logind[1514]: Removed session 9. Oct 31 00:51:52.942483 systemd[1]: Created slice kubepods-besteffort-pod8e970896_8b75_4293_a2ea_264f43272b83.slice - libcontainer container kubepods-besteffort-pod8e970896_8b75_4293_a2ea_264f43272b83.slice. Oct 31 00:51:53.011334 kubelet[2709]: I1031 00:51:53.011300 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkwjn\" (UniqueName: \"kubernetes.io/projected/8e970896-8b75-4293-a2ea-264f43272b83-kube-api-access-nkwjn\") pod \"calico-typha-7ccb656d8b-bvrvh\" (UID: \"8e970896-8b75-4293-a2ea-264f43272b83\") " pod="calico-system/calico-typha-7ccb656d8b-bvrvh" Oct 31 00:51:53.011334 kubelet[2709]: I1031 00:51:53.011329 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e970896-8b75-4293-a2ea-264f43272b83-typha-certs\") pod \"calico-typha-7ccb656d8b-bvrvh\" (UID: \"8e970896-8b75-4293-a2ea-264f43272b83\") " pod="calico-system/calico-typha-7ccb656d8b-bvrvh" Oct 31 00:51:53.011606 kubelet[2709]: I1031 00:51:53.011341 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e970896-8b75-4293-a2ea-264f43272b83-tigera-ca-bundle\") pod \"calico-typha-7ccb656d8b-bvrvh\" (UID: \"8e970896-8b75-4293-a2ea-264f43272b83\") " pod="calico-system/calico-typha-7ccb656d8b-bvrvh" Oct 31 00:51:53.111573 systemd[1]: Created slice kubepods-besteffort-pod15f270dd_f8f9_40ad_b9b7_6ef9c9ea4695.slice - libcontainer container kubepods-besteffort-pod15f270dd_f8f9_40ad_b9b7_6ef9c9ea4695.slice. 
Oct 31 00:51:53.212918 kubelet[2709]: I1031 00:51:53.212593 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-cni-bin-dir\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213162 kubelet[2709]: I1031 00:51:53.213149 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-var-run-calico\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213257 kubelet[2709]: I1031 00:51:53.213248 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-cni-net-dir\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213346 kubelet[2709]: I1031 00:51:53.213337 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-flexvol-driver-host\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213430 kubelet[2709]: I1031 00:51:53.213422 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-lib-modules\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213632 kubelet[2709]: I1031 00:51:53.213512 2709 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpfvs\" (UniqueName: \"kubernetes.io/projected/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-kube-api-access-gpfvs\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213632 kubelet[2709]: I1031 00:51:53.213531 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-xtables-lock\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213632 kubelet[2709]: I1031 00:51:53.213544 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-node-certs\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213632 kubelet[2709]: I1031 00:51:53.213564 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-cni-log-dir\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213632 kubelet[2709]: I1031 00:51:53.213575 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-policysync\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213786 kubelet[2709]: I1031 00:51:53.213588 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-tigera-ca-bundle\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.213786 kubelet[2709]: I1031 00:51:53.213599 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695-var-lib-calico\") pod \"calico-node-jfsvd\" (UID: \"15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695\") " pod="calico-system/calico-node-jfsvd" Oct 31 00:51:53.256870 containerd[1535]: time="2025-10-31T00:51:53.256827678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ccb656d8b-bvrvh,Uid:8e970896-8b75-4293-a2ea-264f43272b83,Namespace:calico-system,Attempt:0,}" Oct 31 00:51:53.276166 containerd[1535]: time="2025-10-31T00:51:53.275565026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:53.276166 containerd[1535]: time="2025-10-31T00:51:53.275973684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:53.276166 containerd[1535]: time="2025-10-31T00:51:53.275985847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:53.279112 containerd[1535]: time="2025-10-31T00:51:53.276260383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:53.318262 kubelet[2709]: E1031 00:51:53.318152 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.318262 kubelet[2709]: W1031 00:51:53.318166 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.318262 kubelet[2709]: E1031 00:51:53.318181 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.319126 kubelet[2709]: E1031 00:51:53.318990 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.319126 kubelet[2709]: W1031 00:51:53.318997 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.319126 kubelet[2709]: E1031 00:51:53.319005 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.319659 kubelet[2709]: E1031 00:51:53.319310 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.319659 kubelet[2709]: W1031 00:51:53.319318 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.319659 kubelet[2709]: E1031 00:51:53.319324 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.319766 kubelet[2709]: E1031 00:51:53.319664 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.319766 kubelet[2709]: W1031 00:51:53.319670 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.319766 kubelet[2709]: E1031 00:51:53.319676 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.320343 kubelet[2709]: E1031 00:51:53.320256 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.320343 kubelet[2709]: W1031 00:51:53.320262 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.320343 kubelet[2709]: E1031 00:51:53.320268 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.320693 kubelet[2709]: E1031 00:51:53.320540 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.320693 kubelet[2709]: W1031 00:51:53.320561 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.320693 kubelet[2709]: E1031 00:51:53.320569 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.321255 kubelet[2709]: E1031 00:51:53.320696 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.321255 kubelet[2709]: W1031 00:51:53.320701 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.321255 kubelet[2709]: E1031 00:51:53.320707 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.321255 kubelet[2709]: E1031 00:51:53.320890 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.321255 kubelet[2709]: W1031 00:51:53.320901 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.321255 kubelet[2709]: E1031 00:51:53.320906 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.321848 kubelet[2709]: E1031 00:51:53.321824 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.321848 kubelet[2709]: W1031 00:51:53.321833 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.321848 kubelet[2709]: E1031 00:51:53.321841 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.322205 kubelet[2709]: E1031 00:51:53.321956 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.322205 kubelet[2709]: W1031 00:51:53.321974 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.322205 kubelet[2709]: E1031 00:51:53.321980 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.323083 kubelet[2709]: E1031 00:51:53.322553 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:51:53.323083 kubelet[2709]: E1031 00:51:53.322850 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.323083 kubelet[2709]: W1031 00:51:53.322855 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.323083 kubelet[2709]: E1031 00:51:53.322861 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.322824 systemd[1]: Started cri-containerd-ca08317084dbc17c4bc9397837f5b4b1fdbbaada83bba15c83e33974e1b7f4f6.scope - libcontainer container ca08317084dbc17c4bc9397837f5b4b1fdbbaada83bba15c83e33974e1b7f4f6. Oct 31 00:51:53.324669 kubelet[2709]: E1031 00:51:53.323915 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.324669 kubelet[2709]: W1031 00:51:53.323923 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.324669 kubelet[2709]: E1031 00:51:53.323931 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.325150 kubelet[2709]: E1031 00:51:53.325046 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.325356 kubelet[2709]: W1031 00:51:53.325342 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.326660 kubelet[2709]: E1031 00:51:53.325355 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.341334 kubelet[2709]: E1031 00:51:53.341202 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.341334 kubelet[2709]: W1031 00:51:53.341218 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.341334 kubelet[2709]: E1031 00:51:53.341233 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.379516 containerd[1535]: time="2025-10-31T00:51:53.379490711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ccb656d8b-bvrvh,Uid:8e970896-8b75-4293-a2ea-264f43272b83,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca08317084dbc17c4bc9397837f5b4b1fdbbaada83bba15c83e33974e1b7f4f6\"" Oct 31 00:51:53.380541 containerd[1535]: time="2025-10-31T00:51:53.380524362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 00:51:53.394532 kubelet[2709]: E1031 00:51:53.394437 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.394532 kubelet[2709]: W1031 00:51:53.394449 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.394532 kubelet[2709]: E1031 00:51:53.394462 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.395212 kubelet[2709]: E1031 00:51:53.395058 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.395212 kubelet[2709]: W1031 00:51:53.395066 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.395212 kubelet[2709]: E1031 00:51:53.395073 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.395572 kubelet[2709]: E1031 00:51:53.395442 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.395572 kubelet[2709]: W1031 00:51:53.395449 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.395572 kubelet[2709]: E1031 00:51:53.395456 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.395963 kubelet[2709]: E1031 00:51:53.395849 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.395963 kubelet[2709]: W1031 00:51:53.395857 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.395963 kubelet[2709]: E1031 00:51:53.395863 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.396143 kubelet[2709]: E1031 00:51:53.396079 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.396143 kubelet[2709]: W1031 00:51:53.396085 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.396143 kubelet[2709]: E1031 00:51:53.396092 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.396366 kubelet[2709]: E1031 00:51:53.396303 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.396366 kubelet[2709]: W1031 00:51:53.396307 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.396366 kubelet[2709]: E1031 00:51:53.396313 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.396590 kubelet[2709]: E1031 00:51:53.396501 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.396590 kubelet[2709]: W1031 00:51:53.396508 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.396590 kubelet[2709]: E1031 00:51:53.396521 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.396820 kubelet[2709]: E1031 00:51:53.396750 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.396820 kubelet[2709]: W1031 00:51:53.396758 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.396820 kubelet[2709]: E1031 00:51:53.396764 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.397013 kubelet[2709]: E1031 00:51:53.396967 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.397013 kubelet[2709]: W1031 00:51:53.396974 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.397013 kubelet[2709]: E1031 00:51:53.396980 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.397250 kubelet[2709]: E1031 00:51:53.397208 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.397250 kubelet[2709]: W1031 00:51:53.397215 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.397250 kubelet[2709]: E1031 00:51:53.397220 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.397468 kubelet[2709]: E1031 00:51:53.397418 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.397468 kubelet[2709]: W1031 00:51:53.397424 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.397468 kubelet[2709]: E1031 00:51:53.397430 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.397703 kubelet[2709]: E1031 00:51:53.397640 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.397703 kubelet[2709]: W1031 00:51:53.397646 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.397703 kubelet[2709]: E1031 00:51:53.397652 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.397889 kubelet[2709]: E1031 00:51:53.397830 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.397889 kubelet[2709]: W1031 00:51:53.397850 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.397889 kubelet[2709]: E1031 00:51:53.397857 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.398097 kubelet[2709]: E1031 00:51:53.398019 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.398097 kubelet[2709]: W1031 00:51:53.398025 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.398097 kubelet[2709]: E1031 00:51:53.398030 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.398364 kubelet[2709]: E1031 00:51:53.398301 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.398364 kubelet[2709]: W1031 00:51:53.398308 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.398364 kubelet[2709]: E1031 00:51:53.398314 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.398676 kubelet[2709]: E1031 00:51:53.398567 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.398676 kubelet[2709]: W1031 00:51:53.398598 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.398676 kubelet[2709]: E1031 00:51:53.398607 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.398952 kubelet[2709]: E1031 00:51:53.398913 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.398952 kubelet[2709]: W1031 00:51:53.398927 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.398952 kubelet[2709]: E1031 00:51:53.398935 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.399313 kubelet[2709]: E1031 00:51:53.399246 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.399313 kubelet[2709]: W1031 00:51:53.399253 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.399313 kubelet[2709]: E1031 00:51:53.399259 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.399563 kubelet[2709]: E1031 00:51:53.399464 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.399563 kubelet[2709]: W1031 00:51:53.399470 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.399563 kubelet[2709]: E1031 00:51:53.399476 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.399748 kubelet[2709]: E1031 00:51:53.399741 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.399820 kubelet[2709]: W1031 00:51:53.399774 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.399820 kubelet[2709]: E1031 00:51:53.399782 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.416908 kubelet[2709]: E1031 00:51:53.416890 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.416908 kubelet[2709]: W1031 00:51:53.416904 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.417031 kubelet[2709]: E1031 00:51:53.416916 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.417031 kubelet[2709]: I1031 00:51:53.416942 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp2c9\" (UniqueName: \"kubernetes.io/projected/5e393e8e-d87c-4c00-a0d8-1932978c09f4-kube-api-access-rp2c9\") pod \"csi-node-driver-qfqgh\" (UID: \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\") " pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:51:53.417165 kubelet[2709]: E1031 00:51:53.417151 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.417165 kubelet[2709]: W1031 00:51:53.417162 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.417296 kubelet[2709]: E1031 00:51:53.417170 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.417296 kubelet[2709]: I1031 00:51:53.417182 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e393e8e-d87c-4c00-a0d8-1932978c09f4-socket-dir\") pod \"csi-node-driver-qfqgh\" (UID: \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\") " pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:51:53.417443 kubelet[2709]: E1031 00:51:53.417367 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.417443 kubelet[2709]: W1031 00:51:53.417376 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.417443 kubelet[2709]: E1031 00:51:53.417383 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.417574 kubelet[2709]: E1031 00:51:53.417543 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.417574 kubelet[2709]: W1031 00:51:53.417551 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.417574 kubelet[2709]: E1031 00:51:53.417556 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.418188 kubelet[2709]: E1031 00:51:53.418173 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.418188 kubelet[2709]: W1031 00:51:53.418181 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.418188 kubelet[2709]: E1031 00:51:53.418188 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.418515 kubelet[2709]: I1031 00:51:53.418202 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e393e8e-d87c-4c00-a0d8-1932978c09f4-kubelet-dir\") pod \"csi-node-driver-qfqgh\" (UID: \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\") " pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:51:53.418515 kubelet[2709]: E1031 00:51:53.418415 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.418515 kubelet[2709]: W1031 00:51:53.418422 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.418515 kubelet[2709]: E1031 00:51:53.418428 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.418515 kubelet[2709]: I1031 00:51:53.418450 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e393e8e-d87c-4c00-a0d8-1932978c09f4-registration-dir\") pod \"csi-node-driver-qfqgh\" (UID: \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\") " pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:51:53.418794 kubelet[2709]: E1031 00:51:53.418778 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.418794 kubelet[2709]: W1031 00:51:53.418785 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.418794 kubelet[2709]: E1031 00:51:53.418791 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.419106 kubelet[2709]: E1031 00:51:53.419048 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.419106 kubelet[2709]: W1031 00:51:53.419054 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.419106 kubelet[2709]: E1031 00:51:53.419059 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.419350 kubelet[2709]: E1031 00:51:53.419307 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.419350 kubelet[2709]: W1031 00:51:53.419312 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.419350 kubelet[2709]: E1031 00:51:53.419318 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.419350 kubelet[2709]: I1031 00:51:53.419330 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e393e8e-d87c-4c00-a0d8-1932978c09f4-varrun\") pod \"csi-node-driver-qfqgh\" (UID: \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\") " pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:51:53.419579 kubelet[2709]: E1031 00:51:53.419569 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.419579 kubelet[2709]: W1031 00:51:53.419577 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.419843 kubelet[2709]: E1031 00:51:53.419583 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.419971 kubelet[2709]: E1031 00:51:53.419960 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.419971 kubelet[2709]: W1031 00:51:53.419968 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.420023 kubelet[2709]: E1031 00:51:53.419974 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.420589 kubelet[2709]: E1031 00:51:53.420124 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.420589 kubelet[2709]: W1031 00:51:53.420129 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.420589 kubelet[2709]: E1031 00:51:53.420134 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.421311 kubelet[2709]: E1031 00:51:53.421289 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.421311 kubelet[2709]: W1031 00:51:53.421297 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.421311 kubelet[2709]: E1031 00:51:53.421303 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.421963 kubelet[2709]: E1031 00:51:53.421676 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.421963 kubelet[2709]: W1031 00:51:53.421684 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.421963 kubelet[2709]: E1031 00:51:53.421690 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.421963 kubelet[2709]: E1031 00:51:53.421877 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.421963 kubelet[2709]: W1031 00:51:53.421882 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.421963 kubelet[2709]: E1031 00:51:53.421887 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.427873 containerd[1535]: time="2025-10-31T00:51:53.427836575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jfsvd,Uid:15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695,Namespace:calico-system,Attempt:0,}" Oct 31 00:51:53.457110 containerd[1535]: time="2025-10-31T00:51:53.456910531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:51:53.457110 containerd[1535]: time="2025-10-31T00:51:53.456948524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:51:53.457110 containerd[1535]: time="2025-10-31T00:51:53.456958787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:53.457110 containerd[1535]: time="2025-10-31T00:51:53.457018189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:51:53.471722 systemd[1]: Started cri-containerd-bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f.scope - libcontainer container bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f. Oct 31 00:51:53.498723 containerd[1535]: time="2025-10-31T00:51:53.498702399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jfsvd,Uid:15f270dd-f8f9-40ad-b9b7-6ef9c9ea4695,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\"" Oct 31 00:51:53.521905 kubelet[2709]: E1031 00:51:53.521887 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.522008 kubelet[2709]: W1031 00:51:53.521997 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.522087 kubelet[2709]: E1031 00:51:53.522077 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.522339 kubelet[2709]: E1031 00:51:53.522333 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.522407 kubelet[2709]: W1031 00:51:53.522379 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.522407 kubelet[2709]: E1031 00:51:53.522397 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.522690 kubelet[2709]: E1031 00:51:53.522614 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.522690 kubelet[2709]: W1031 00:51:53.522648 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.522690 kubelet[2709]: E1031 00:51:53.522655 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.522923 kubelet[2709]: E1031 00:51:53.522906 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.522923 kubelet[2709]: W1031 00:51:53.522912 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.522923 kubelet[2709]: E1031 00:51:53.522917 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.523124 kubelet[2709]: E1031 00:51:53.523119 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.523315 kubelet[2709]: W1031 00:51:53.523151 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.523315 kubelet[2709]: E1031 00:51:53.523305 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.524155 kubelet[2709]: E1031 00:51:53.524145 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.524291 kubelet[2709]: W1031 00:51:53.524215 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.524291 kubelet[2709]: E1031 00:51:53.524226 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.524389 kubelet[2709]: E1031 00:51:53.524383 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.524445 kubelet[2709]: W1031 00:51:53.524422 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.524546 kubelet[2709]: E1031 00:51:53.524519 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.524931 kubelet[2709]: E1031 00:51:53.524843 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.524931 kubelet[2709]: W1031 00:51:53.524850 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.524931 kubelet[2709]: E1031 00:51:53.524856 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.525238 kubelet[2709]: E1031 00:51:53.525118 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.525238 kubelet[2709]: W1031 00:51:53.525126 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.525238 kubelet[2709]: E1031 00:51:53.525133 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.525801 kubelet[2709]: E1031 00:51:53.525639 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.525801 kubelet[2709]: W1031 00:51:53.525649 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.525801 kubelet[2709]: E1031 00:51:53.525655 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.526048 kubelet[2709]: E1031 00:51:53.526019 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.526048 kubelet[2709]: W1031 00:51:53.526030 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.526048 kubelet[2709]: E1031 00:51:53.526038 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.526408 kubelet[2709]: E1031 00:51:53.526327 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.526408 kubelet[2709]: W1031 00:51:53.526334 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.526408 kubelet[2709]: E1031 00:51:53.526341 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.526666 kubelet[2709]: E1031 00:51:53.526594 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.526666 kubelet[2709]: W1031 00:51:53.526601 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.526666 kubelet[2709]: E1031 00:51:53.526607 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.526845 kubelet[2709]: E1031 00:51:53.526825 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.526845 kubelet[2709]: W1031 00:51:53.526832 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.526845 kubelet[2709]: E1031 00:51:53.526838 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.527018 kubelet[2709]: E1031 00:51:53.527001 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.527018 kubelet[2709]: W1031 00:51:53.527007 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.527018 kubelet[2709]: E1031 00:51:53.527012 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.527222 kubelet[2709]: E1031 00:51:53.527205 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.527222 kubelet[2709]: W1031 00:51:53.527211 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.527222 kubelet[2709]: E1031 00:51:53.527217 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.527697 kubelet[2709]: E1031 00:51:53.527690 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.527900 kubelet[2709]: W1031 00:51:53.527840 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.527900 kubelet[2709]: E1031 00:51:53.527848 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.528108 kubelet[2709]: E1031 00:51:53.528046 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.528108 kubelet[2709]: W1031 00:51:53.528052 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.528108 kubelet[2709]: E1031 00:51:53.528058 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.528781 kubelet[2709]: E1031 00:51:53.528327 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.528781 kubelet[2709]: W1031 00:51:53.528333 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.528781 kubelet[2709]: E1031 00:51:53.528339 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.528881 kubelet[2709]: E1031 00:51:53.528875 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.528918 kubelet[2709]: W1031 00:51:53.528912 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.528989 kubelet[2709]: E1031 00:51:53.528946 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.529593 kubelet[2709]: E1031 00:51:53.529566 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.529850 kubelet[2709]: W1031 00:51:53.529650 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.529850 kubelet[2709]: E1031 00:51:53.529659 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.533700 kubelet[2709]: E1031 00:51:53.533687 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.533700 kubelet[2709]: W1031 00:51:53.533698 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.533753 kubelet[2709]: E1031 00:51:53.533708 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535030 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.536628 kubelet[2709]: W1031 00:51:53.535038 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535050 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535185 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.536628 kubelet[2709]: W1031 00:51:53.535190 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535196 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535307 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.536628 kubelet[2709]: W1031 00:51:53.535312 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535317 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:53.536628 kubelet[2709]: E1031 00:51:53.535756 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:53.536807 kubelet[2709]: W1031 00:51:53.535765 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:53.536807 kubelet[2709]: E1031 00:51:53.535773 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:54.778937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683783023.mount: Deactivated successfully. 
Oct 31 00:51:55.268207 containerd[1535]: time="2025-10-31T00:51:55.268179494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:55.269119 containerd[1535]: time="2025-10-31T00:51:55.269059118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 00:51:55.269426 containerd[1535]: time="2025-10-31T00:51:55.269407415Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:55.270743 containerd[1535]: time="2025-10-31T00:51:55.270724294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:55.271503 containerd[1535]: time="2025-10-31T00:51:55.271480159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.890936654s" Oct 31 00:51:55.271547 containerd[1535]: time="2025-10-31T00:51:55.271504441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 00:51:55.272695 containerd[1535]: time="2025-10-31T00:51:55.272590411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 00:51:55.286401 containerd[1535]: time="2025-10-31T00:51:55.286366715Z" level=info msg="CreateContainer within sandbox \"ca08317084dbc17c4bc9397837f5b4b1fdbbaada83bba15c83e33974e1b7f4f6\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 00:51:55.294054 containerd[1535]: time="2025-10-31T00:51:55.294030608Z" level=info msg="CreateContainer within sandbox \"ca08317084dbc17c4bc9397837f5b4b1fdbbaada83bba15c83e33974e1b7f4f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae8b1c99991c66028e99ac1deb8e38881c32682b7757169e803d2720a2ce7761\"" Oct 31 00:51:55.295649 containerd[1535]: time="2025-10-31T00:51:55.295591937Z" level=info msg="StartContainer for \"ae8b1c99991c66028e99ac1deb8e38881c32682b7757169e803d2720a2ce7761\"" Oct 31 00:51:55.313792 systemd[1]: Started cri-containerd-ae8b1c99991c66028e99ac1deb8e38881c32682b7757169e803d2720a2ce7761.scope - libcontainer container ae8b1c99991c66028e99ac1deb8e38881c32682b7757169e803d2720a2ce7761. Oct 31 00:51:55.364845 containerd[1535]: time="2025-10-31T00:51:55.364597675Z" level=info msg="StartContainer for \"ae8b1c99991c66028e99ac1deb8e38881c32682b7757169e803d2720a2ce7761\" returns successfully" Oct 31 00:51:55.470871 kubelet[2709]: E1031 00:51:55.470559 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:51:55.549769 kubelet[2709]: I1031 00:51:55.548224 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ccb656d8b-bvrvh" podStartSLOduration=1.6565224010000001 podStartE2EDuration="3.548211473s" podCreationTimestamp="2025-10-31 00:51:52 +0000 UTC" firstStartedPulling="2025-10-31 00:51:53.380281355 +0000 UTC m=+21.139731129" lastFinishedPulling="2025-10-31 00:51:55.271970427 +0000 UTC m=+23.031420201" observedRunningTime="2025-10-31 00:51:55.546853092 +0000 UTC m=+23.306302878" watchObservedRunningTime="2025-10-31 00:51:55.548211473 +0000 UTC 
m=+23.307661258" Oct 31 00:51:55.615930 kubelet[2709]: E1031 00:51:55.615902 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.615930 kubelet[2709]: W1031 00:51:55.615927 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.616133 kubelet[2709]: E1031 00:51:55.615947 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.616133 kubelet[2709]: E1031 00:51:55.616075 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.616133 kubelet[2709]: W1031 00:51:55.616083 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.616133 kubelet[2709]: E1031 00:51:55.616089 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.616370 kubelet[2709]: E1031 00:51:55.616218 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.616370 kubelet[2709]: W1031 00:51:55.616228 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.616370 kubelet[2709]: E1031 00:51:55.616234 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.616497 kubelet[2709]: E1031 00:51:55.616381 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.616497 kubelet[2709]: W1031 00:51:55.616388 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.616497 kubelet[2709]: E1031 00:51:55.616394 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.616738 kubelet[2709]: E1031 00:51:55.616516 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.616738 kubelet[2709]: W1031 00:51:55.616521 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.616738 kubelet[2709]: E1031 00:51:55.616528 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.616738 kubelet[2709]: E1031 00:51:55.616718 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.616738 kubelet[2709]: W1031 00:51:55.616723 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.617861 kubelet[2709]: E1031 00:51:55.616729 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.618974 kubelet[2709]: E1031 00:51:55.618958 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.618974 kubelet[2709]: W1031 00:51:55.618972 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619155 kubelet[2709]: E1031 00:51:55.618988 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.619155 kubelet[2709]: E1031 00:51:55.619152 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.619200 kubelet[2709]: W1031 00:51:55.619158 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619200 kubelet[2709]: E1031 00:51:55.619166 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.619306 kubelet[2709]: E1031 00:51:55.619301 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.619332 kubelet[2709]: W1031 00:51:55.619308 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619332 kubelet[2709]: E1031 00:51:55.619323 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.619461 kubelet[2709]: E1031 00:51:55.619450 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.619461 kubelet[2709]: W1031 00:51:55.619457 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619572 kubelet[2709]: E1031 00:51:55.619463 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.619596 kubelet[2709]: E1031 00:51:55.619575 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.619596 kubelet[2709]: W1031 00:51:55.619582 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619729 kubelet[2709]: E1031 00:51:55.619600 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.619814 kubelet[2709]: E1031 00:51:55.619805 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.619851 kubelet[2709]: W1031 00:51:55.619814 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619851 kubelet[2709]: E1031 00:51:55.619822 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.619972 kubelet[2709]: E1031 00:51:55.619936 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.619972 kubelet[2709]: W1031 00:51:55.619943 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.619972 kubelet[2709]: E1031 00:51:55.619954 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.620153 kubelet[2709]: E1031 00:51:55.620143 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.620196 kubelet[2709]: W1031 00:51:55.620152 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.620196 kubelet[2709]: E1031 00:51:55.620160 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.620320 kubelet[2709]: E1031 00:51:55.620311 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.620320 kubelet[2709]: W1031 00:51:55.620318 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.620369 kubelet[2709]: E1031 00:51:55.620323 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.639674 kubelet[2709]: E1031 00:51:55.639597 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.639674 kubelet[2709]: W1031 00:51:55.639611 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.639674 kubelet[2709]: E1031 00:51:55.639655 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.639957 kubelet[2709]: E1031 00:51:55.639926 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.639957 kubelet[2709]: W1031 00:51:55.639932 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.639957 kubelet[2709]: E1031 00:51:55.639937 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.640071 kubelet[2709]: E1031 00:51:55.640059 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.640071 kubelet[2709]: W1031 00:51:55.640069 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.640197 kubelet[2709]: E1031 00:51:55.640077 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.640224 kubelet[2709]: E1031 00:51:55.640203 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.640224 kubelet[2709]: W1031 00:51:55.640208 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.640224 kubelet[2709]: E1031 00:51:55.640213 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.640312 kubelet[2709]: E1031 00:51:55.640302 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.640312 kubelet[2709]: W1031 00:51:55.640310 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640315 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640419 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.641786 kubelet[2709]: W1031 00:51:55.640423 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640430 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640608 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.641786 kubelet[2709]: W1031 00:51:55.640616 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640636 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640774 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.641786 kubelet[2709]: W1031 00:51:55.640779 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.641786 kubelet[2709]: E1031 00:51:55.640784 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.640894 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642027 kubelet[2709]: W1031 00:51:55.640898 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.640903 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.640998 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642027 kubelet[2709]: W1031 00:51:55.641008 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.641012 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.641113 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642027 kubelet[2709]: W1031 00:51:55.641118 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.641122 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.642027 kubelet[2709]: E1031 00:51:55.641376 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642197 kubelet[2709]: W1031 00:51:55.641384 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642197 kubelet[2709]: E1031 00:51:55.641392 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.642197 kubelet[2709]: E1031 00:51:55.641500 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642197 kubelet[2709]: W1031 00:51:55.641505 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642197 kubelet[2709]: E1031 00:51:55.641509 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.642197 kubelet[2709]: E1031 00:51:55.641675 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642197 kubelet[2709]: W1031 00:51:55.641679 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642197 kubelet[2709]: E1031 00:51:55.641684 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.642430 kubelet[2709]: E1031 00:51:55.642372 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642430 kubelet[2709]: W1031 00:51:55.642378 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642430 kubelet[2709]: E1031 00:51:55.642384 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.642863 kubelet[2709]: E1031 00:51:55.642485 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642863 kubelet[2709]: W1031 00:51:55.642492 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642863 kubelet[2709]: E1031 00:51:55.642499 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:55.642863 kubelet[2709]: E1031 00:51:55.642646 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642863 kubelet[2709]: W1031 00:51:55.642653 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642863 kubelet[2709]: E1031 00:51:55.642658 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:55.642863 kubelet[2709]: E1031 00:51:55.642834 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:55.642863 kubelet[2709]: W1031 00:51:55.642841 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:55.642863 kubelet[2709]: E1031 00:51:55.642849 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.543417 kubelet[2709]: I1031 00:51:56.543120 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:51:56.627163 kubelet[2709]: E1031 00:51:56.627143 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.627163 kubelet[2709]: W1031 00:51:56.627159 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.627391 kubelet[2709]: E1031 00:51:56.627174 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.627391 kubelet[2709]: E1031 00:51:56.627344 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.627391 kubelet[2709]: W1031 00:51:56.627351 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.627391 kubelet[2709]: E1031 00:51:56.627356 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.627584 kubelet[2709]: E1031 00:51:56.627485 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.627584 kubelet[2709]: W1031 00:51:56.627490 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.627584 kubelet[2709]: E1031 00:51:56.627495 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.627643 kubelet[2709]: E1031 00:51:56.627602 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.627643 kubelet[2709]: W1031 00:51:56.627606 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.627643 kubelet[2709]: E1031 00:51:56.627612 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.627870 kubelet[2709]: E1031 00:51:56.627769 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.627870 kubelet[2709]: W1031 00:51:56.627775 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.627870 kubelet[2709]: E1031 00:51:56.627780 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.627983 kubelet[2709]: E1031 00:51:56.627899 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.627983 kubelet[2709]: W1031 00:51:56.627904 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.627983 kubelet[2709]: E1031 00:51:56.627909 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.628091 kubelet[2709]: E1031 00:51:56.628014 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.628091 kubelet[2709]: W1031 00:51:56.628018 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.628091 kubelet[2709]: E1031 00:51:56.628023 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.628238 kubelet[2709]: E1031 00:51:56.628199 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.628238 kubelet[2709]: W1031 00:51:56.628204 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.628238 kubelet[2709]: E1031 00:51:56.628209 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.628483 kubelet[2709]: E1031 00:51:56.628318 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.628483 kubelet[2709]: W1031 00:51:56.628323 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.628483 kubelet[2709]: E1031 00:51:56.628328 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.628483 kubelet[2709]: E1031 00:51:56.628478 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.628483 kubelet[2709]: W1031 00:51:56.628482 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.628634 kubelet[2709]: E1031 00:51:56.628487 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.628759 kubelet[2709]: E1031 00:51:56.628729 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.628759 kubelet[2709]: W1031 00:51:56.628740 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.628759 kubelet[2709]: E1031 00:51:56.628746 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.628878 kubelet[2709]: E1031 00:51:56.628850 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.628878 kubelet[2709]: W1031 00:51:56.628855 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.628983 kubelet[2709]: E1031 00:51:56.628933 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.640087 kubelet[2709]: E1031 00:51:56.629161 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.640087 kubelet[2709]: W1031 00:51:56.629166 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.640087 kubelet[2709]: E1031 00:51:56.629171 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.640087 kubelet[2709]: E1031 00:51:56.629274 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.640087 kubelet[2709]: W1031 00:51:56.629278 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.640087 kubelet[2709]: E1031 00:51:56.629283 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.640087 kubelet[2709]: E1031 00:51:56.629493 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.640087 kubelet[2709]: W1031 00:51:56.629498 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.640087 kubelet[2709]: E1031 00:51:56.629503 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.647691 kubelet[2709]: E1031 00:51:56.647680 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.647827 kubelet[2709]: W1031 00:51:56.647746 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.647827 kubelet[2709]: E1031 00:51:56.647759 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.647935 kubelet[2709]: E1031 00:51:56.647915 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.648008 kubelet[2709]: W1031 00:51:56.647923 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.648008 kubelet[2709]: E1031 00:51:56.647982 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.648130 kubelet[2709]: E1031 00:51:56.648118 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.648130 kubelet[2709]: W1031 00:51:56.648128 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.648214 kubelet[2709]: E1031 00:51:56.648136 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.648317 kubelet[2709]: E1031 00:51:56.648307 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.648317 kubelet[2709]: W1031 00:51:56.648315 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.648359 kubelet[2709]: E1031 00:51:56.648322 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.648435 kubelet[2709]: E1031 00:51:56.648425 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.648497 kubelet[2709]: W1031 00:51:56.648434 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.648497 kubelet[2709]: E1031 00:51:56.648442 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.648586 kubelet[2709]: E1031 00:51:56.648561 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.648586 kubelet[2709]: W1031 00:51:56.648566 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.648586 kubelet[2709]: E1031 00:51:56.648572 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.648877 kubelet[2709]: E1031 00:51:56.648835 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.648877 kubelet[2709]: W1031 00:51:56.648842 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.648877 kubelet[2709]: E1031 00:51:56.648849 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.648967 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675545 kubelet[2709]: W1031 00:51:56.648973 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.648979 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.649117 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675545 kubelet[2709]: W1031 00:51:56.649123 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.649130 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.649256 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675545 kubelet[2709]: W1031 00:51:56.649261 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.649267 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.675545 kubelet[2709]: E1031 00:51:56.649388 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675756 kubelet[2709]: W1031 00:51:56.649393 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675756 kubelet[2709]: E1031 00:51:56.649400 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.675756 kubelet[2709]: E1031 00:51:56.649716 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675756 kubelet[2709]: W1031 00:51:56.649723 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675756 kubelet[2709]: E1031 00:51:56.649729 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.675756 kubelet[2709]: E1031 00:51:56.649877 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675756 kubelet[2709]: W1031 00:51:56.649883 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675756 kubelet[2709]: E1031 00:51:56.649889 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.675756 kubelet[2709]: E1031 00:51:56.650028 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675756 kubelet[2709]: W1031 00:51:56.650034 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650040 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650160 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675931 kubelet[2709]: W1031 00:51:56.650165 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650171 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650297 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675931 kubelet[2709]: W1031 00:51:56.650305 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650707 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650851 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.675931 kubelet[2709]: W1031 00:51:56.650858 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.675931 kubelet[2709]: E1031 00:51:56.650864 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:51:56.676134 kubelet[2709]: E1031 00:51:56.651142 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:51:56.676134 kubelet[2709]: W1031 00:51:56.651148 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:51:56.676134 kubelet[2709]: E1031 00:51:56.651156 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:51:56.721073 containerd[1535]: time="2025-10-31T00:51:56.720557087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:56.726353 containerd[1535]: time="2025-10-31T00:51:56.726325508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 00:51:56.737770 containerd[1535]: time="2025-10-31T00:51:56.737738956Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:56.742918 containerd[1535]: time="2025-10-31T00:51:56.742875652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:51:56.743513 containerd[1535]: time="2025-10-31T00:51:56.743166029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.47055382s" Oct 31 00:51:56.743513 containerd[1535]: time="2025-10-31T00:51:56.743189654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 00:51:56.745983 containerd[1535]: time="2025-10-31T00:51:56.745953804Z" level=info msg="CreateContainer within sandbox \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 00:51:56.769583 containerd[1535]: time="2025-10-31T00:51:56.769561660Z" level=info msg="CreateContainer within sandbox \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf\"" Oct 31 00:51:56.770231 containerd[1535]: time="2025-10-31T00:51:56.770075458Z" level=info msg="StartContainer for \"a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf\"" Oct 31 00:51:56.801793 systemd[1]: Started cri-containerd-a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf.scope - libcontainer container a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf. Oct 31 00:51:56.837200 containerd[1535]: time="2025-10-31T00:51:56.837168598Z" level=info msg="StartContainer for \"a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf\" returns successfully" Oct 31 00:51:56.839732 systemd[1]: cri-containerd-a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf.scope: Deactivated successfully. Oct 31 00:51:56.855231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf-rootfs.mount: Deactivated successfully. 
Oct 31 00:51:57.017073 containerd[1535]: time="2025-10-31T00:51:57.015692456Z" level=info msg="shim disconnected" id=a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf namespace=k8s.io Oct 31 00:51:57.017073 containerd[1535]: time="2025-10-31T00:51:57.016937715Z" level=warning msg="cleaning up after shim disconnected" id=a4595b14e72eeef6fb5b7c71800718f62918e033dbd6e7b11430eb184c320cdf namespace=k8s.io Oct 31 00:51:57.017073 containerd[1535]: time="2025-10-31T00:51:57.016995735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:51:57.471106 kubelet[2709]: E1031 00:51:57.471072 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:51:57.546267 containerd[1535]: time="2025-10-31T00:51:57.546224384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 00:51:59.470733 kubelet[2709]: E1031 00:51:59.470709 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:52:00.187462 containerd[1535]: time="2025-10-31T00:52:00.186990861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:52:00.193587 containerd[1535]: time="2025-10-31T00:52:00.193561211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 00:52:00.203112 containerd[1535]: time="2025-10-31T00:52:00.203094434Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:52:00.210789 containerd[1535]: time="2025-10-31T00:52:00.210710444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.664440002s" Oct 31 00:52:00.210789 containerd[1535]: time="2025-10-31T00:52:00.210732330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 00:52:00.211928 containerd[1535]: time="2025-10-31T00:52:00.211790756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:52:00.243850 containerd[1535]: time="2025-10-31T00:52:00.243817230Z" level=info msg="CreateContainer within sandbox \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 00:52:00.295494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901444771.mount: Deactivated successfully. 
Oct 31 00:52:00.296107 containerd[1535]: time="2025-10-31T00:52:00.296087088Z" level=info msg="CreateContainer within sandbox \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152\"" Oct 31 00:52:00.297556 containerd[1535]: time="2025-10-31T00:52:00.296501394Z" level=info msg="StartContainer for \"5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152\"" Oct 31 00:52:00.320715 systemd[1]: Started cri-containerd-5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152.scope - libcontainer container 5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152. Oct 31 00:52:00.339692 containerd[1535]: time="2025-10-31T00:52:00.339672301Z" level=info msg="StartContainer for \"5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152\" returns successfully" Oct 31 00:52:01.448826 systemd[1]: cri-containerd-5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152.scope: Deactivated successfully. Oct 31 00:52:01.471260 kubelet[2709]: E1031 00:52:01.470736 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:52:01.477685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152-rootfs.mount: Deactivated successfully. 
Oct 31 00:52:01.492978 containerd[1535]: time="2025-10-31T00:52:01.492945525Z" level=info msg="shim disconnected" id=5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152 namespace=k8s.io Oct 31 00:52:01.494362 containerd[1535]: time="2025-10-31T00:52:01.493195638Z" level=warning msg="cleaning up after shim disconnected" id=5d94946b08005ad5305959d7e4284a2846f94c9ccfe47a5dd40d8c94a662e152 namespace=k8s.io Oct 31 00:52:01.494362 containerd[1535]: time="2025-10-31T00:52:01.493205679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:52:01.501902 containerd[1535]: time="2025-10-31T00:52:01.501883963Z" level=warning msg="cleanup warnings time=\"2025-10-31T00:52:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 31 00:52:01.512576 kubelet[2709]: I1031 00:52:01.512561 2709 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 00:52:01.541927 systemd[1]: Created slice kubepods-burstable-podcaacd2fa_ec7d_4e23_bd05_4cbeb62fc6f5.slice - libcontainer container kubepods-burstable-podcaacd2fa_ec7d_4e23_bd05_4cbeb62fc6f5.slice. Oct 31 00:52:01.559130 systemd[1]: Created slice kubepods-besteffort-pod56a98ce3_aefb_4f4d_a4ba_fe832cc8a1df.slice - libcontainer container kubepods-besteffort-pod56a98ce3_aefb_4f4d_a4ba_fe832cc8a1df.slice. Oct 31 00:52:01.564825 containerd[1535]: time="2025-10-31T00:52:01.564807581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 00:52:01.571808 systemd[1]: Created slice kubepods-besteffort-pod38874b0f_68ea_48b6_8bf2_17a9a00061c5.slice - libcontainer container kubepods-besteffort-pod38874b0f_68ea_48b6_8bf2_17a9a00061c5.slice. Oct 31 00:52:01.581681 systemd[1]: Created slice kubepods-besteffort-pod3561ae24_137d_44ba_89a5_d4068542bce6.slice - libcontainer container kubepods-besteffort-pod3561ae24_137d_44ba_89a5_d4068542bce6.slice. 
Oct 31 00:52:01.582101 kubelet[2709]: I1031 00:52:01.581871 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrnt\" (UniqueName: \"kubernetes.io/projected/56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df-kube-api-access-ddrnt\") pod \"goldmane-7c778bb748-62kb2\" (UID: \"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df\") " pod="calico-system/goldmane-7c778bb748-62kb2" Oct 31 00:52:01.582101 kubelet[2709]: I1031 00:52:01.581894 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-backend-key-pair\") pod \"whisker-6b8495b656-fkzcv\" (UID: \"bab1cf9d-8b44-456a-b129-5e28e68f2959\") " pod="calico-system/whisker-6b8495b656-fkzcv" Oct 31 00:52:01.582101 kubelet[2709]: I1031 00:52:01.581904 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-ca-bundle\") pod \"whisker-6b8495b656-fkzcv\" (UID: \"bab1cf9d-8b44-456a-b129-5e28e68f2959\") " pod="calico-system/whisker-6b8495b656-fkzcv" Oct 31 00:52:01.582101 kubelet[2709]: I1031 00:52:01.581912 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7xs\" (UniqueName: \"kubernetes.io/projected/bab1cf9d-8b44-456a-b129-5e28e68f2959-kube-api-access-lr7xs\") pod \"whisker-6b8495b656-fkzcv\" (UID: \"bab1cf9d-8b44-456a-b129-5e28e68f2959\") " pod="calico-system/whisker-6b8495b656-fkzcv" Oct 31 00:52:01.582101 kubelet[2709]: I1031 00:52:01.581921 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487d8bf9-c5d4-4162-b747-015052300a2e-config-volume\") pod \"coredns-66bc5c9577-czb88\" (UID: 
\"487d8bf9-c5d4-4162-b747-015052300a2e\") " pod="kube-system/coredns-66bc5c9577-czb88" Oct 31 00:52:01.582269 kubelet[2709]: I1031 00:52:01.581933 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3561ae24-137d-44ba-89a5-d4068542bce6-calico-apiserver-certs\") pod \"calico-apiserver-6577bb4886-7s98w\" (UID: \"3561ae24-137d-44ba-89a5-d4068542bce6\") " pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" Oct 31 00:52:01.582269 kubelet[2709]: I1031 00:52:01.581942 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5-config-volume\") pod \"coredns-66bc5c9577-tx48q\" (UID: \"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5\") " pod="kube-system/coredns-66bc5c9577-tx48q" Oct 31 00:52:01.582269 kubelet[2709]: I1031 00:52:01.581952 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8hs\" (UniqueName: \"kubernetes.io/projected/caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5-kube-api-access-sx8hs\") pod \"coredns-66bc5c9577-tx48q\" (UID: \"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5\") " pod="kube-system/coredns-66bc5c9577-tx48q" Oct 31 00:52:01.582269 kubelet[2709]: I1031 00:52:01.581966 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pm8\" (UniqueName: \"kubernetes.io/projected/3561ae24-137d-44ba-89a5-d4068542bce6-kube-api-access-q4pm8\") pod \"calico-apiserver-6577bb4886-7s98w\" (UID: \"3561ae24-137d-44ba-89a5-d4068542bce6\") " pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" Oct 31 00:52:01.582269 kubelet[2709]: I1031 00:52:01.581975 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-62kb2\" (UID: \"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df\") " pod="calico-system/goldmane-7c778bb748-62kb2" Oct 31 00:52:01.582381 kubelet[2709]: I1031 00:52:01.581992 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm45t\" (UniqueName: \"kubernetes.io/projected/fb892052-e9f7-4494-bd9e-d42433970af9-kube-api-access-xm45t\") pod \"calico-kube-controllers-84c8975546-52zt4\" (UID: \"fb892052-e9f7-4494-bd9e-d42433970af9\") " pod="calico-system/calico-kube-controllers-84c8975546-52zt4" Oct 31 00:52:01.582381 kubelet[2709]: I1031 00:52:01.582012 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df-config\") pod \"goldmane-7c778bb748-62kb2\" (UID: \"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df\") " pod="calico-system/goldmane-7c778bb748-62kb2" Oct 31 00:52:01.582381 kubelet[2709]: I1031 00:52:01.582022 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38874b0f-68ea-48b6-8bf2-17a9a00061c5-calico-apiserver-certs\") pod \"calico-apiserver-6577bb4886-mlhcr\" (UID: \"38874b0f-68ea-48b6-8bf2-17a9a00061c5\") " pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" Oct 31 00:52:01.582381 kubelet[2709]: I1031 00:52:01.582031 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfh45\" (UniqueName: \"kubernetes.io/projected/38874b0f-68ea-48b6-8bf2-17a9a00061c5-kube-api-access-wfh45\") pod \"calico-apiserver-6577bb4886-mlhcr\" (UID: \"38874b0f-68ea-48b6-8bf2-17a9a00061c5\") " pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" Oct 31 00:52:01.582381 kubelet[2709]: I1031 00:52:01.582042 2709 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhmk\" (UniqueName: \"kubernetes.io/projected/487d8bf9-c5d4-4162-b747-015052300a2e-kube-api-access-glhmk\") pod \"coredns-66bc5c9577-czb88\" (UID: \"487d8bf9-c5d4-4162-b747-015052300a2e\") " pod="kube-system/coredns-66bc5c9577-czb88" Oct 31 00:52:01.582940 kubelet[2709]: I1031 00:52:01.582067 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb892052-e9f7-4494-bd9e-d42433970af9-tigera-ca-bundle\") pod \"calico-kube-controllers-84c8975546-52zt4\" (UID: \"fb892052-e9f7-4494-bd9e-d42433970af9\") " pod="calico-system/calico-kube-controllers-84c8975546-52zt4" Oct 31 00:52:01.582940 kubelet[2709]: I1031 00:52:01.582078 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df-goldmane-key-pair\") pod \"goldmane-7c778bb748-62kb2\" (UID: \"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df\") " pod="calico-system/goldmane-7c778bb748-62kb2" Oct 31 00:52:01.586261 systemd[1]: Created slice kubepods-burstable-pod487d8bf9_c5d4_4162_b747_015052300a2e.slice - libcontainer container kubepods-burstable-pod487d8bf9_c5d4_4162_b747_015052300a2e.slice. Oct 31 00:52:01.593878 systemd[1]: Created slice kubepods-besteffort-podbab1cf9d_8b44_456a_b129_5e28e68f2959.slice - libcontainer container kubepods-besteffort-podbab1cf9d_8b44_456a_b129_5e28e68f2959.slice. Oct 31 00:52:01.597091 systemd[1]: Created slice kubepods-besteffort-podfb892052_e9f7_4494_bd9e_d42433970af9.slice - libcontainer container kubepods-besteffort-podfb892052_e9f7_4494_bd9e_d42433970af9.slice. 
Oct 31 00:52:01.860260 containerd[1535]: time="2025-10-31T00:52:01.860239285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tx48q,Uid:caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5,Namespace:kube-system,Attempt:0,}" Oct 31 00:52:01.867605 containerd[1535]: time="2025-10-31T00:52:01.867584301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-62kb2,Uid:56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df,Namespace:calico-system,Attempt:0,}" Oct 31 00:52:01.878544 containerd[1535]: time="2025-10-31T00:52:01.878522625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-mlhcr,Uid:38874b0f-68ea-48b6-8bf2-17a9a00061c5,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:52:01.897368 containerd[1535]: time="2025-10-31T00:52:01.897207960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8495b656-fkzcv,Uid:bab1cf9d-8b44-456a-b129-5e28e68f2959,Namespace:calico-system,Attempt:0,}" Oct 31 00:52:01.898201 containerd[1535]: time="2025-10-31T00:52:01.898148850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-7s98w,Uid:3561ae24-137d-44ba-89a5-d4068542bce6,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:52:01.898383 containerd[1535]: time="2025-10-31T00:52:01.898367351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czb88,Uid:487d8bf9-c5d4-4162-b747-015052300a2e,Namespace:kube-system,Attempt:0,}" Oct 31 00:52:01.900528 containerd[1535]: time="2025-10-31T00:52:01.900515745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c8975546-52zt4,Uid:fb892052-e9f7-4494-bd9e-d42433970af9,Namespace:calico-system,Attempt:0,}" Oct 31 00:52:02.124777 containerd[1535]: time="2025-10-31T00:52:02.124687697Z" level=error msg="Failed to destroy network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.125665 containerd[1535]: time="2025-10-31T00:52:02.125006809Z" level=error msg="Failed to destroy network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.128248 containerd[1535]: time="2025-10-31T00:52:02.128231353Z" level=error msg="encountered an error cleaning up failed sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.128438 containerd[1535]: time="2025-10-31T00:52:02.128315178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-62kb2,Uid:56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.132965 containerd[1535]: time="2025-10-31T00:52:02.132634891Z" level=error msg="encountered an error cleaning up failed sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.132965 containerd[1535]: 
time="2025-10-31T00:52:02.132677263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c8975546-52zt4,Uid:fb892052-e9f7-4494-bd9e-d42433970af9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.137483 containerd[1535]: time="2025-10-31T00:52:02.137453028Z" level=error msg="Failed to destroy network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.137696 containerd[1535]: time="2025-10-31T00:52:02.137678944Z" level=error msg="encountered an error cleaning up failed sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.137738 containerd[1535]: time="2025-10-31T00:52:02.137714974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-mlhcr,Uid:38874b0f-68ea-48b6-8bf2-17a9a00061c5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.140100 kubelet[2709]: E1031 00:52:02.138308 2709 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.140100 kubelet[2709]: E1031 00:52:02.138357 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" Oct 31 00:52:02.140100 kubelet[2709]: E1031 00:52:02.138376 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" Oct 31 00:52:02.140192 kubelet[2709]: E1031 00:52:02.138411 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6577bb4886-mlhcr_calico-apiserver(38874b0f-68ea-48b6-8bf2-17a9a00061c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6577bb4886-mlhcr_calico-apiserver(38874b0f-68ea-48b6-8bf2-17a9a00061c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:02.140192 kubelet[2709]: E1031 00:52:02.138557 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.140192 kubelet[2709]: E1031 00:52:02.138569 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-62kb2" Oct 31 00:52:02.140266 kubelet[2709]: E1031 00:52:02.138578 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-62kb2" Oct 31 00:52:02.140266 kubelet[2709]: E1031 00:52:02.138596 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-62kb2_calico-system(56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-7c778bb748-62kb2_calico-system(56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df" Oct 31 00:52:02.140266 kubelet[2709]: E1031 00:52:02.138813 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.140363 kubelet[2709]: E1031 00:52:02.138829 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" Oct 31 00:52:02.140363 kubelet[2709]: E1031 00:52:02.138839 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" Oct 31 00:52:02.140363 
kubelet[2709]: E1031 00:52:02.138859 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84c8975546-52zt4_calico-system(fb892052-e9f7-4494-bd9e-d42433970af9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84c8975546-52zt4_calico-system(fb892052-e9f7-4494-bd9e-d42433970af9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9" Oct 31 00:52:02.145788 containerd[1535]: time="2025-10-31T00:52:02.145699589Z" level=error msg="Failed to destroy network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.146048 containerd[1535]: time="2025-10-31T00:52:02.145942230Z" level=error msg="encountered an error cleaning up failed sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.146048 containerd[1535]: time="2025-10-31T00:52:02.145975209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tx48q,Uid:caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.146735 kubelet[2709]: E1031 00:52:02.146717 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.146860 kubelet[2709]: E1031 00:52:02.146746 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tx48q" Oct 31 00:52:02.146860 kubelet[2709]: E1031 00:52:02.146773 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tx48q" Oct 31 00:52:02.146860 kubelet[2709]: E1031 00:52:02.146805 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tx48q_kube-system(caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-tx48q_kube-system(caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tx48q" podUID="caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5" Oct 31 00:52:02.150560 containerd[1535]: time="2025-10-31T00:52:02.150478927Z" level=error msg="Failed to destroy network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.150716 containerd[1535]: time="2025-10-31T00:52:02.150699693Z" level=error msg="encountered an error cleaning up failed sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.150755 containerd[1535]: time="2025-10-31T00:52:02.150730463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b8495b656-fkzcv,Uid:bab1cf9d-8b44-456a-b129-5e28e68f2959,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.150888 kubelet[2709]: E1031 00:52:02.150830 2709 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.150888 kubelet[2709]: E1031 00:52:02.150855 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b8495b656-fkzcv" Oct 31 00:52:02.150888 kubelet[2709]: E1031 00:52:02.150866 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b8495b656-fkzcv" Oct 31 00:52:02.152186 kubelet[2709]: E1031 00:52:02.150891 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b8495b656-fkzcv_calico-system(bab1cf9d-8b44-456a-b129-5e28e68f2959)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b8495b656-fkzcv_calico-system(bab1cf9d-8b44-456a-b129-5e28e68f2959)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-6b8495b656-fkzcv" podUID="bab1cf9d-8b44-456a-b129-5e28e68f2959" Oct 31 00:52:02.152235 containerd[1535]: time="2025-10-31T00:52:02.152097550Z" level=error msg="Failed to destroy network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.152470 containerd[1535]: time="2025-10-31T00:52:02.152407684Z" level=error msg="encountered an error cleaning up failed sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.152470 containerd[1535]: time="2025-10-31T00:52:02.152435309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-7s98w,Uid:3561ae24-137d-44ba-89a5-d4068542bce6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.152643 kubelet[2709]: E1031 00:52:02.152554 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.152643 kubelet[2709]: E1031 
00:52:02.152575 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" Oct 31 00:52:02.152643 kubelet[2709]: E1031 00:52:02.152601 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" Oct 31 00:52:02.152720 kubelet[2709]: E1031 00:52:02.152647 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6577bb4886-7s98w_calico-apiserver(3561ae24-137d-44ba-89a5-d4068542bce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6577bb4886-7s98w_calico-apiserver(3561ae24-137d-44ba-89a5-d4068542bce6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6" Oct 31 00:52:02.156342 containerd[1535]: time="2025-10-31T00:52:02.156323246Z" level=error msg="Failed to destroy network for sandbox 
\"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.156754 containerd[1535]: time="2025-10-31T00:52:02.156598642Z" level=error msg="encountered an error cleaning up failed sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.156754 containerd[1535]: time="2025-10-31T00:52:02.156643936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czb88,Uid:487d8bf9-c5d4-4162-b747-015052300a2e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.157429 kubelet[2709]: E1031 00:52:02.156760 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.157429 kubelet[2709]: E1031 00:52:02.156784 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czb88" Oct 31 00:52:02.157429 kubelet[2709]: E1031 00:52:02.156795 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-czb88" Oct 31 00:52:02.157500 kubelet[2709]: E1031 00:52:02.156829 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-czb88_kube-system(487d8bf9-c5d4-4162-b747-015052300a2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-czb88_kube-system(487d8bf9-c5d4-4162-b747-015052300a2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czb88" podUID="487d8bf9-c5d4-4162-b747-015052300a2e" Oct 31 00:52:02.481188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8-shm.mount: Deactivated successfully. 
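Every sandbox failure above reduces to the same root cause: `stat /var/lib/calico/nodename: no such file or directory`. That file is normally written by the calico/node container when it starts on the host, so its absence means calico-node has not yet initialized (or has not mounted `/var/lib/calico/`), and every CNI add/delete fails until it does. A minimal diagnostic sketch for this condition (the function name and output strings are hypothetical, not part of Calico's tooling; the default path is the one named in the log):

```shell
#!/bin/sh
# Check whether calico-node has written its nodename file on this host.
# Absence of the file produces exactly the CNI errors seen in the log above.
check_calico_nodename() {
  # Path defaults to the location the CNI plugin stats; overridable for testing.
  f="${1:-/var/lib/calico/nodename}"
  if [ -f "$f" ]; then
    echo "nodename present: $(cat "$f")"
    return 0
  else
    echo "missing: $f (is calico-node running with /var/lib/calico/ mounted?)"
    return 1
  fi
}
```

If the file is missing, the usual next step is to inspect the calico-node pod itself (e.g. `kubectl -n calico-system get pods -l k8s-app=calico-node`) rather than the failing workload pods, since those will keep retrying sandbox creation on their own once the CNI is healthy.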
Oct 31 00:52:02.565945 kubelet[2709]: I1031 00:52:02.565916 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:02.568007 kubelet[2709]: I1031 00:52:02.567943 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:02.569547 containerd[1535]: time="2025-10-31T00:52:02.568557794Z" level=info msg="StopPodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\"" Oct 31 00:52:02.569812 containerd[1535]: time="2025-10-31T00:52:02.569608038Z" level=info msg="Ensure that sandbox 4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f in task-service has been cleanup successfully" Oct 31 00:52:02.575045 kubelet[2709]: I1031 00:52:02.575026 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:02.575729 containerd[1535]: time="2025-10-31T00:52:02.575473894Z" level=info msg="StopPodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\"" Oct 31 00:52:02.575729 containerd[1535]: time="2025-10-31T00:52:02.575579257Z" level=info msg="Ensure that sandbox ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8 in task-service has been cleanup successfully" Oct 31 00:52:02.575926 containerd[1535]: time="2025-10-31T00:52:02.575913588Z" level=info msg="StopPodSandbox for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\"" Oct 31 00:52:02.576065 containerd[1535]: time="2025-10-31T00:52:02.576055426Z" level=info msg="Ensure that sandbox 7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea in task-service has been cleanup successfully" Oct 31 00:52:02.580644 kubelet[2709]: I1031 00:52:02.579834 2709 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:02.580849 containerd[1535]: time="2025-10-31T00:52:02.580830311Z" level=info msg="StopPodSandbox for \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\"" Oct 31 00:52:02.580930 containerd[1535]: time="2025-10-31T00:52:02.580918080Z" level=info msg="Ensure that sandbox 8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04 in task-service has been cleanup successfully" Oct 31 00:52:02.583352 kubelet[2709]: I1031 00:52:02.583336 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:02.583970 containerd[1535]: time="2025-10-31T00:52:02.583957311Z" level=info msg="StopPodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\"" Oct 31 00:52:02.584125 containerd[1535]: time="2025-10-31T00:52:02.584115231Z" level=info msg="Ensure that sandbox d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109 in task-service has been cleanup successfully" Oct 31 00:52:02.587797 kubelet[2709]: I1031 00:52:02.587781 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:02.588721 containerd[1535]: time="2025-10-31T00:52:02.588702400Z" level=info msg="StopPodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\"" Oct 31 00:52:02.596962 containerd[1535]: time="2025-10-31T00:52:02.596761038Z" level=info msg="Ensure that sandbox e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604 in task-service has been cleanup successfully" Oct 31 00:52:02.599534 kubelet[2709]: I1031 00:52:02.599403 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:02.600670 
containerd[1535]: time="2025-10-31T00:52:02.600652137Z" level=info msg="StopPodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\"" Oct 31 00:52:02.600906 containerd[1535]: time="2025-10-31T00:52:02.600742247Z" level=info msg="Ensure that sandbox afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3 in task-service has been cleanup successfully" Oct 31 00:52:02.624425 containerd[1535]: time="2025-10-31T00:52:02.624374246Z" level=error msg="StopPodSandbox for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" failed" error="failed to destroy network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.624812 kubelet[2709]: E1031 00:52:02.624786 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:02.624874 kubelet[2709]: E1031 00:52:02.624825 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea"} Oct 31 00:52:02.624874 kubelet[2709]: E1031 00:52:02.624858 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3561ae24-137d-44ba-89a5-d4068542bce6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.625040 kubelet[2709]: E1031 00:52:02.624876 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3561ae24-137d-44ba-89a5-d4068542bce6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6" Oct 31 00:52:02.631762 containerd[1535]: time="2025-10-31T00:52:02.631608044Z" level=error msg="StopPodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" failed" error="failed to destroy network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.631904 kubelet[2709]: E1031 00:52:02.631838 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:02.631904 kubelet[2709]: E1031 00:52:02.631866 2709 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f"} Oct 31 00:52:02.631904 kubelet[2709]: E1031 00:52:02.631888 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.632092 kubelet[2709]: E1031 00:52:02.631904 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df" Oct 31 00:52:02.635055 containerd[1535]: time="2025-10-31T00:52:02.634952124Z" level=error msg="StopPodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" failed" error="failed to destroy network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.635164 kubelet[2709]: E1031 00:52:02.635131 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to destroy network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:02.635205 kubelet[2709]: E1031 00:52:02.635166 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8"} Oct 31 00:52:02.635205 kubelet[2709]: E1031 00:52:02.635182 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.635205 kubelet[2709]: E1031 00:52:02.635195 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tx48q" podUID="caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5" Oct 31 00:52:02.638381 containerd[1535]: time="2025-10-31T00:52:02.638253495Z" level=error msg="StopPodSandbox for 
\"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" failed" error="failed to destroy network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.638635 kubelet[2709]: E1031 00:52:02.638346 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:02.638635 kubelet[2709]: E1031 00:52:02.638362 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04"} Oct 31 00:52:02.638635 kubelet[2709]: E1031 00:52:02.638376 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb892052-e9f7-4494-bd9e-d42433970af9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.638635 kubelet[2709]: E1031 00:52:02.638389 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb892052-e9f7-4494-bd9e-d42433970af9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9" Oct 31 00:52:02.641240 containerd[1535]: time="2025-10-31T00:52:02.641219129Z" level=error msg="StopPodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" failed" error="failed to destroy network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.641314 kubelet[2709]: E1031 00:52:02.641297 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:02.641349 kubelet[2709]: E1031 00:52:02.641316 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604"} Oct 31 00:52:02.641349 kubelet[2709]: E1031 00:52:02.641329 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bab1cf9d-8b44-456a-b129-5e28e68f2959\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.641349 kubelet[2709]: E1031 00:52:02.641342 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bab1cf9d-8b44-456a-b129-5e28e68f2959\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b8495b656-fkzcv" podUID="bab1cf9d-8b44-456a-b129-5e28e68f2959" Oct 31 00:52:02.649713 containerd[1535]: time="2025-10-31T00:52:02.649349600Z" level=error msg="StopPodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" failed" error="failed to destroy network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.649790 kubelet[2709]: E1031 00:52:02.649494 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:02.649790 kubelet[2709]: E1031 00:52:02.649534 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109"} Oct 31 00:52:02.649790 kubelet[2709]: E1031 00:52:02.649550 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"487d8bf9-c5d4-4162-b747-015052300a2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.649790 kubelet[2709]: E1031 00:52:02.649570 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"487d8bf9-c5d4-4162-b747-015052300a2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-czb88" podUID="487d8bf9-c5d4-4162-b747-015052300a2e" Oct 31 00:52:02.650337 containerd[1535]: time="2025-10-31T00:52:02.650309875Z" level=error msg="StopPodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" failed" error="failed to destroy network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:02.650433 kubelet[2709]: E1031 00:52:02.650417 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:02.650461 kubelet[2709]: E1031 00:52:02.650434 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3"} Oct 31 00:52:02.650461 kubelet[2709]: E1031 00:52:02.650446 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38874b0f-68ea-48b6-8bf2-17a9a00061c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:02.650514 kubelet[2709]: E1031 00:52:02.650458 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38874b0f-68ea-48b6-8bf2-17a9a00061c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:03.474959 systemd[1]: Created slice kubepods-besteffort-pod5e393e8e_d87c_4c00_a0d8_1932978c09f4.slice - libcontainer container kubepods-besteffort-pod5e393e8e_d87c_4c00_a0d8_1932978c09f4.slice. 
Oct 31 00:52:03.486235 containerd[1535]: time="2025-10-31T00:52:03.486208209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfqgh,Uid:5e393e8e-d87c-4c00-a0d8-1932978c09f4,Namespace:calico-system,Attempt:0,}" Oct 31 00:52:03.663574 containerd[1535]: time="2025-10-31T00:52:03.663539541Z" level=error msg="Failed to destroy network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:03.664708 containerd[1535]: time="2025-10-31T00:52:03.663957280Z" level=error msg="encountered an error cleaning up failed sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:03.664708 containerd[1535]: time="2025-10-31T00:52:03.663989134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfqgh,Uid:5e393e8e-d87c-4c00-a0d8-1932978c09f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:03.664817 kubelet[2709]: E1031 00:52:03.664720 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:03.664817 kubelet[2709]: E1031 00:52:03.664781 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:52:03.664817 kubelet[2709]: E1031 00:52:03.664794 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qfqgh" Oct 31 00:52:03.666291 kubelet[2709]: E1031 00:52:03.664831 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:52:03.665825 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b-shm.mount: Deactivated successfully. Oct 31 00:52:04.637934 kubelet[2709]: I1031 00:52:04.637913 2709 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:04.638650 containerd[1535]: time="2025-10-31T00:52:04.638628900Z" level=info msg="StopPodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\"" Oct 31 00:52:04.654952 containerd[1535]: time="2025-10-31T00:52:04.654916271Z" level=info msg="Ensure that sandbox 4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b in task-service has been cleanup successfully" Oct 31 00:52:04.684608 containerd[1535]: time="2025-10-31T00:52:04.684540588Z" level=error msg="StopPodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" failed" error="failed to destroy network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:52:04.685013 kubelet[2709]: E1031 00:52:04.684756 2709 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:04.685013 kubelet[2709]: E1031 00:52:04.684816 2709 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b"} Oct 31 00:52:04.685013 kubelet[2709]: E1031 00:52:04.684842 2709 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:52:04.685013 kubelet[2709]: E1031 00:52:04.684861 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e393e8e-d87c-4c00-a0d8-1932978c09f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:52:05.678479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414461448.mount: Deactivated successfully. 
Oct 31 00:52:05.732013 containerd[1535]: time="2025-10-31T00:52:05.731267340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 00:52:05.742093 containerd[1535]: time="2025-10-31T00:52:05.742077525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:52:05.765743 containerd[1535]: time="2025-10-31T00:52:05.765724336Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:52:05.780843 containerd[1535]: time="2025-10-31T00:52:05.780823507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:52:05.786215 containerd[1535]: time="2025-10-31T00:52:05.786166880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.218949574s" Oct 31 00:52:05.786215 containerd[1535]: time="2025-10-31T00:52:05.786188601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 00:52:05.846121 containerd[1535]: time="2025-10-31T00:52:05.846064145Z" level=info msg="CreateContainer within sandbox \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 00:52:05.877583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478783714.mount: 
Deactivated successfully. Oct 31 00:52:05.881559 containerd[1535]: time="2025-10-31T00:52:05.881537000Z" level=info msg="CreateContainer within sandbox \"bc801b46745252665da9d0246e36689da73fb14eeea207f025f33bd738e6312f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519\"" Oct 31 00:52:05.890742 containerd[1535]: time="2025-10-31T00:52:05.889959452Z" level=info msg="StartContainer for \"aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519\"" Oct 31 00:52:06.033706 systemd[1]: Started cri-containerd-aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519.scope - libcontainer container aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519. Oct 31 00:52:06.053648 containerd[1535]: time="2025-10-31T00:52:06.053598259Z" level=info msg="StartContainer for \"aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519\" returns successfully" Oct 31 00:52:06.518699 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 00:52:06.520734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 00:52:06.736089 systemd[1]: run-containerd-runc-k8s.io-aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519-runc.QdOV9G.mount: Deactivated successfully. 
Oct 31 00:52:06.803812 kubelet[2709]: I1031 00:52:06.783371 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jfsvd" podStartSLOduration=1.481806815 podStartE2EDuration="13.769128224s" podCreationTimestamp="2025-10-31 00:51:53 +0000 UTC" firstStartedPulling="2025-10-31 00:51:53.499439168 +0000 UTC m=+21.258888942" lastFinishedPulling="2025-10-31 00:52:05.786760577 +0000 UTC m=+33.546210351" observedRunningTime="2025-10-31 00:52:06.702950477 +0000 UTC m=+34.462400262" watchObservedRunningTime="2025-10-31 00:52:06.769128224 +0000 UTC m=+34.528578004" Oct 31 00:52:06.804360 containerd[1535]: time="2025-10-31T00:52:06.804339096Z" level=info msg="StopPodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\"" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:06.903 [INFO][3969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:06.903 [INFO][3969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" iface="eth0" netns="/var/run/netns/cni-c83ab255-592d-3d58-ba8b-5a24dda56708" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:06.904 [INFO][3969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" iface="eth0" netns="/var/run/netns/cni-c83ab255-592d-3d58-ba8b-5a24dda56708" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:06.904 [INFO][3969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" iface="eth0" netns="/var/run/netns/cni-c83ab255-592d-3d58-ba8b-5a24dda56708" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:06.904 [INFO][3969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:06.904 [INFO][3969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.212 [INFO][3977] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.215 [INFO][3977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.215 [INFO][3977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.224 [WARNING][3977] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.224 [INFO][3977] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.225 [INFO][3977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:07.228187 containerd[1535]: 2025-10-31 00:52:07.226 [INFO][3969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:07.229401 containerd[1535]: time="2025-10-31T00:52:07.228529125Z" level=info msg="TearDown network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" successfully" Oct 31 00:52:07.229401 containerd[1535]: time="2025-10-31T00:52:07.228547844Z" level=info msg="StopPodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" returns successfully" Oct 31 00:52:07.228966 systemd[1]: run-netns-cni\x2dc83ab255\x2d592d\x2d3d58\x2dba8b\x2d5a24dda56708.mount: Deactivated successfully. 
Oct 31 00:52:07.321109 kubelet[2709]: I1031 00:52:07.320859 2709 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-ca-bundle\") pod \"bab1cf9d-8b44-456a-b129-5e28e68f2959\" (UID: \"bab1cf9d-8b44-456a-b129-5e28e68f2959\") " Oct 31 00:52:07.321109 kubelet[2709]: I1031 00:52:07.320916 2709 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr7xs\" (UniqueName: \"kubernetes.io/projected/bab1cf9d-8b44-456a-b129-5e28e68f2959-kube-api-access-lr7xs\") pod \"bab1cf9d-8b44-456a-b129-5e28e68f2959\" (UID: \"bab1cf9d-8b44-456a-b129-5e28e68f2959\") " Oct 31 00:52:07.321109 kubelet[2709]: I1031 00:52:07.320935 2709 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-backend-key-pair\") pod \"bab1cf9d-8b44-456a-b129-5e28e68f2959\" (UID: \"bab1cf9d-8b44-456a-b129-5e28e68f2959\") " Oct 31 00:52:07.331509 kubelet[2709]: I1031 00:52:07.327664 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bab1cf9d-8b44-456a-b129-5e28e68f2959" (UID: "bab1cf9d-8b44-456a-b129-5e28e68f2959"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:52:07.344522 kubelet[2709]: I1031 00:52:07.344492 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab1cf9d-8b44-456a-b129-5e28e68f2959-kube-api-access-lr7xs" (OuterVolumeSpecName: "kube-api-access-lr7xs") pod "bab1cf9d-8b44-456a-b129-5e28e68f2959" (UID: "bab1cf9d-8b44-456a-b129-5e28e68f2959"). InnerVolumeSpecName "kube-api-access-lr7xs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:52:07.344860 kubelet[2709]: I1031 00:52:07.344838 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bab1cf9d-8b44-456a-b129-5e28e68f2959" (UID: "bab1cf9d-8b44-456a-b129-5e28e68f2959"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:52:07.345347 systemd[1]: var-lib-kubelet-pods-bab1cf9d\x2d8b44\x2d456a\x2db129\x2d5e28e68f2959-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlr7xs.mount: Deactivated successfully. Oct 31 00:52:07.345419 systemd[1]: var-lib-kubelet-pods-bab1cf9d\x2d8b44\x2d456a\x2db129\x2d5e28e68f2959-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 00:52:07.421135 kubelet[2709]: I1031 00:52:07.421106 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:07.421135 kubelet[2709]: I1031 00:52:07.421133 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bab1cf9d-8b44-456a-b129-5e28e68f2959-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:07.421135 kubelet[2709]: I1031 00:52:07.421143 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lr7xs\" (UniqueName: \"kubernetes.io/projected/bab1cf9d-8b44-456a-b129-5e28e68f2959-kube-api-access-lr7xs\") on node \"localhost\" DevicePath \"\"" Oct 31 00:52:07.687112 systemd[1]: Removed slice kubepods-besteffort-podbab1cf9d_8b44_456a_b129_5e28e68f2959.slice - libcontainer container kubepods-besteffort-podbab1cf9d_8b44_456a_b129_5e28e68f2959.slice. 
Oct 31 00:52:08.137014 systemd[1]: Created slice kubepods-besteffort-pode4d268cd_a761_4f69_b6d1_2ef35e9f3bff.slice - libcontainer container kubepods-besteffort-pode4d268cd_a761_4f69_b6d1_2ef35e9f3bff.slice. Oct 31 00:52:08.226779 kubelet[2709]: I1031 00:52:08.226749 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4d268cd-a761-4f69-b6d1-2ef35e9f3bff-whisker-backend-key-pair\") pod \"whisker-565cb9cccd-v4n4z\" (UID: \"e4d268cd-a761-4f69-b6d1-2ef35e9f3bff\") " pod="calico-system/whisker-565cb9cccd-v4n4z" Oct 31 00:52:08.229981 kubelet[2709]: I1031 00:52:08.229956 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d268cd-a761-4f69-b6d1-2ef35e9f3bff-whisker-ca-bundle\") pod \"whisker-565cb9cccd-v4n4z\" (UID: \"e4d268cd-a761-4f69-b6d1-2ef35e9f3bff\") " pod="calico-system/whisker-565cb9cccd-v4n4z" Oct 31 00:52:08.230213 kubelet[2709]: I1031 00:52:08.229997 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xj9p\" (UniqueName: \"kubernetes.io/projected/e4d268cd-a761-4f69-b6d1-2ef35e9f3bff-kube-api-access-8xj9p\") pod \"whisker-565cb9cccd-v4n4z\" (UID: \"e4d268cd-a761-4f69-b6d1-2ef35e9f3bff\") " pod="calico-system/whisker-565cb9cccd-v4n4z" Oct 31 00:52:08.450413 containerd[1535]: time="2025-10-31T00:52:08.450302412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-565cb9cccd-v4n4z,Uid:e4d268cd-a761-4f69-b6d1-2ef35e9f3bff,Namespace:calico-system,Attempt:0,}" Oct 31 00:52:08.483067 kubelet[2709]: I1031 00:52:08.482933 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab1cf9d-8b44-456a-b129-5e28e68f2959" path="/var/lib/kubelet/pods/bab1cf9d-8b44-456a-b129-5e28e68f2959/volumes" Oct 31 00:52:08.545194 systemd-networkd[1463]: calia40860fe6d0: 
Link UP Oct 31 00:52:08.545298 systemd-networkd[1463]: calia40860fe6d0: Gained carrier Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.479 [INFO][4111] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.489 [INFO][4111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--565cb9cccd--v4n4z-eth0 whisker-565cb9cccd- calico-system e4d268cd-a761-4f69-b6d1-2ef35e9f3bff 912 0 2025-10-31 00:52:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:565cb9cccd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-565cb9cccd-v4n4z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia40860fe6d0 [] [] }} ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.489 [INFO][4111] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.505 [INFO][4124] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" HandleID="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Workload="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.505 [INFO][4124] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" HandleID="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Workload="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-565cb9cccd-v4n4z", "timestamp":"2025-10-31 00:52:08.505201085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.505 [INFO][4124] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.505 [INFO][4124] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.505 [INFO][4124] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.512 [INFO][4124] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.522 [INFO][4124] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.526 [INFO][4124] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.526 [INFO][4124] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.528 [INFO][4124] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.528 [INFO][4124] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.528 [INFO][4124] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438 Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.530 [INFO][4124] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.533 [INFO][4124] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.533 [INFO][4124] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" host="localhost" Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.533 [INFO][4124] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:52:08.557365 containerd[1535]: 2025-10-31 00:52:08.533 [INFO][4124] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" HandleID="k8s-pod-network.f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Workload="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.562672 containerd[1535]: 2025-10-31 00:52:08.535 [INFO][4111] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--565cb9cccd--v4n4z-eth0", GenerateName:"whisker-565cb9cccd-", Namespace:"calico-system", SelfLink:"", UID:"e4d268cd-a761-4f69-b6d1-2ef35e9f3bff", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 52, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"565cb9cccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-565cb9cccd-v4n4z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia40860fe6d0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:08.562672 containerd[1535]: 2025-10-31 00:52:08.536 [INFO][4111] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.562672 containerd[1535]: 2025-10-31 00:52:08.536 [INFO][4111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia40860fe6d0 ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.562672 containerd[1535]: 2025-10-31 00:52:08.543 [INFO][4111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.562672 containerd[1535]: 2025-10-31 00:52:08.543 [INFO][4111] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--565cb9cccd--v4n4z-eth0", GenerateName:"whisker-565cb9cccd-", Namespace:"calico-system", SelfLink:"", UID:"e4d268cd-a761-4f69-b6d1-2ef35e9f3bff", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 52, 7, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"565cb9cccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438", Pod:"whisker-565cb9cccd-v4n4z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia40860fe6d0", MAC:"8a:bd:66:c3:f4:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:08.562672 containerd[1535]: 2025-10-31 00:52:08.551 [INFO][4111] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438" Namespace="calico-system" Pod="whisker-565cb9cccd-v4n4z" WorkloadEndpoint="localhost-k8s-whisker--565cb9cccd--v4n4z-eth0" Oct 31 00:52:08.575283 containerd[1535]: time="2025-10-31T00:52:08.575211991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:08.575283 containerd[1535]: time="2025-10-31T00:52:08.575245172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:08.575283 containerd[1535]: time="2025-10-31T00:52:08.575262682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:08.575569 containerd[1535]: time="2025-10-31T00:52:08.575311655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:08.588748 systemd[1]: Started cri-containerd-f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438.scope - libcontainer container f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438. Oct 31 00:52:08.596726 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:08.636492 containerd[1535]: time="2025-10-31T00:52:08.636418491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-565cb9cccd-v4n4z,Uid:e4d268cd-a761-4f69-b6d1-2ef35e9f3bff,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6588d238a1931a79c2807a6fa4f7cd2bb76701d26c9beb4b26e054e4e177438\"" Oct 31 00:52:08.639332 containerd[1535]: time="2025-10-31T00:52:08.639267030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:52:08.976394 containerd[1535]: time="2025-10-31T00:52:08.976358282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:08.982604 containerd[1535]: time="2025-10-31T00:52:08.977762863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:52:08.982693 containerd[1535]: time="2025-10-31T00:52:08.977918171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:52:08.982918 kubelet[2709]: E1031 00:52:08.982793 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:52:08.982918 kubelet[2709]: E1031 00:52:08.982831 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:52:08.986819 kubelet[2709]: E1031 00:52:08.986686 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-565cb9cccd-v4n4z_calico-system(e4d268cd-a761-4f69-b6d1-2ef35e9f3bff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:08.988528 containerd[1535]: time="2025-10-31T00:52:08.988492116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:52:09.316509 containerd[1535]: time="2025-10-31T00:52:09.316369835Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:09.321967 containerd[1535]: time="2025-10-31T00:52:09.321888785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:52:09.321967 containerd[1535]: time="2025-10-31T00:52:09.321938602Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:52:09.322213 kubelet[2709]: E1031 00:52:09.322185 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:52:09.322394 kubelet[2709]: E1031 00:52:09.322218 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:52:09.322394 kubelet[2709]: E1031 00:52:09.322267 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-565cb9cccd-v4n4z_calico-system(e4d268cd-a761-4f69-b6d1-2ef35e9f3bff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:09.322394 kubelet[2709]: E1031 00:52:09.322294 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff" Oct 31 00:52:09.658976 kubelet[2709]: E1031 00:52:09.658584 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff" Oct 31 00:52:09.840774 systemd-networkd[1463]: calia40860fe6d0: Gained IPv6LL Oct 31 00:52:13.472859 containerd[1535]: time="2025-10-31T00:52:13.472801393Z" level=info msg="StopPodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\"" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.498 [INFO][4304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.498 [INFO][4304] cni-plugin/dataplane_linux.go 559: 
Deleting workload's device in netns. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" iface="eth0" netns="/var/run/netns/cni-900486ee-027f-f6b9-1f1b-e999c24b12e6" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.499 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" iface="eth0" netns="/var/run/netns/cni-900486ee-027f-f6b9-1f1b-e999c24b12e6" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.499 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" iface="eth0" netns="/var/run/netns/cni-900486ee-027f-f6b9-1f1b-e999c24b12e6" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.499 [INFO][4304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.499 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.515 [INFO][4312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.516 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.516 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.520 [WARNING][4312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.520 [INFO][4312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.521 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:13.524764 containerd[1535]: 2025-10-31 00:52:13.522 [INFO][4304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:13.526841 containerd[1535]: time="2025-10-31T00:52:13.525223599Z" level=info msg="TearDown network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" successfully" Oct 31 00:52:13.526841 containerd[1535]: time="2025-10-31T00:52:13.525238649Z" level=info msg="StopPodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" returns successfully" Oct 31 00:52:13.526841 containerd[1535]: time="2025-10-31T00:52:13.526735776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-62kb2,Uid:56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df,Namespace:calico-system,Attempt:1,}" Oct 31 00:52:13.524769 systemd[1]: run-netns-cni\x2d900486ee\x2d027f\x2df6b9\x2d1f1b\x2de999c24b12e6.mount: Deactivated successfully. 
Oct 31 00:52:13.616938 systemd-networkd[1463]: cali0dfb2244a1a: Link UP Oct 31 00:52:13.620213 systemd-networkd[1463]: cali0dfb2244a1a: Gained carrier Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.556 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.565 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--62kb2-eth0 goldmane-7c778bb748- calico-system 56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df 943 0 2025-10-31 00:51:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-62kb2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0dfb2244a1a [] [] }} ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.565 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.592 [INFO][4344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" HandleID="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.592 [INFO][4344] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" HandleID="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-62kb2", "timestamp":"2025-10-31 00:52:13.592344139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.592 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.592 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.592 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.600 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.603 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.606 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.607 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.608 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.608 [INFO][4344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.608 [INFO][4344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073 Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.610 [INFO][4344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.613 [INFO][4344] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.613 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" host="localhost" Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.613 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:52:13.632421 containerd[1535]: 2025-10-31 00:52:13.613 [INFO][4344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" HandleID="k8s-pod-network.7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.635688 containerd[1535]: 2025-10-31 00:52:13.614 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--62kb2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-62kb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0dfb2244a1a", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:13.635688 containerd[1535]: 2025-10-31 00:52:13.614 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.635688 containerd[1535]: 2025-10-31 00:52:13.614 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dfb2244a1a ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.635688 containerd[1535]: 2025-10-31 00:52:13.620 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.635688 containerd[1535]: 2025-10-31 00:52:13.620 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--62kb2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 50, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073", Pod:"goldmane-7c778bb748-62kb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0dfb2244a1a", MAC:"a6:00:df:60:b7:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:13.635688 containerd[1535]: 2025-10-31 00:52:13.627 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073" Namespace="calico-system" Pod="goldmane-7c778bb748-62kb2" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:13.644816 containerd[1535]: time="2025-10-31T00:52:13.644502472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:13.644816 containerd[1535]: time="2025-10-31T00:52:13.644534927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:13.644816 containerd[1535]: time="2025-10-31T00:52:13.644548975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:13.644816 containerd[1535]: time="2025-10-31T00:52:13.644592201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:13.659807 systemd[1]: Started cri-containerd-7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073.scope - libcontainer container 7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073. Oct 31 00:52:13.667795 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:13.687475 containerd[1535]: time="2025-10-31T00:52:13.687247474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-62kb2,Uid:56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073\"" Oct 31 00:52:13.688592 containerd[1535]: time="2025-10-31T00:52:13.688558650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:52:14.156420 containerd[1535]: time="2025-10-31T00:52:14.156385413Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:14.156780 containerd[1535]: time="2025-10-31T00:52:14.156749996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:52:14.156830 containerd[1535]: time="2025-10-31T00:52:14.156804252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:52:14.156948 kubelet[2709]: E1031 00:52:14.156902 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:52:14.156948 kubelet[2709]: E1031 00:52:14.156936 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:52:14.157310 kubelet[2709]: E1031 00:52:14.156991 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-62kb2_calico-system(56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:14.157310 kubelet[2709]: E1031 00:52:14.157015 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df" Oct 31 00:52:14.473101 containerd[1535]: time="2025-10-31T00:52:14.472776387Z" level=info msg="StopPodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\"" Oct 31 00:52:14.473436 containerd[1535]: time="2025-10-31T00:52:14.473052157Z" level=info msg="StopPodSandbox 
for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\"" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4419] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" iface="eth0" netns="/var/run/netns/cni-5b7f057d-b6cd-62e4-6640-962eaa6f2d68" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4419] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" iface="eth0" netns="/var/run/netns/cni-5b7f057d-b6cd-62e4-6640-962eaa6f2d68" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4419] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" iface="eth0" netns="/var/run/netns/cni-5b7f057d-b6cd-62e4-6640-962eaa6f2d68" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.519 [INFO][4431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.520 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.520 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.527 [WARNING][4431] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.527 [INFO][4431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.528 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:14.533678 containerd[1535]: 2025-10-31 00:52:14.529 [INFO][4419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:14.537021 systemd[1]: run-netns-cni\x2d5b7f057d\x2db6cd\x2d62e4\x2d6640\x2d962eaa6f2d68.mount: Deactivated successfully. 
Oct 31 00:52:14.538502 containerd[1535]: time="2025-10-31T00:52:14.538479722Z" level=info msg="TearDown network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" successfully" Oct 31 00:52:14.538502 containerd[1535]: time="2025-10-31T00:52:14.538500242Z" level=info msg="StopPodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" returns successfully" Oct 31 00:52:14.542712 containerd[1535]: time="2025-10-31T00:52:14.542661041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tx48q,Uid:caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5,Namespace:kube-system,Attempt:1,}" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.504 [INFO][4418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.506 [INFO][4418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" iface="eth0" netns="/var/run/netns/cni-a5c17199-d781-fdb1-9ce0-eb194c99c504" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.506 [INFO][4418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" iface="eth0" netns="/var/run/netns/cni-a5c17199-d781-fdb1-9ce0-eb194c99c504" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.506 [INFO][4418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" iface="eth0" netns="/var/run/netns/cni-a5c17199-d781-fdb1-9ce0-eb194c99c504" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.506 [INFO][4418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.506 [INFO][4418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.541 [INFO][4433] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.541 [INFO][4433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.541 [INFO][4433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.547 [WARNING][4433] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.547 [INFO][4433] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.548 [INFO][4433] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:14.553901 containerd[1535]: 2025-10-31 00:52:14.551 [INFO][4418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:14.553901 containerd[1535]: time="2025-10-31T00:52:14.552609215Z" level=info msg="TearDown network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" successfully" Oct 31 00:52:14.553901 containerd[1535]: time="2025-10-31T00:52:14.552647902Z" level=info msg="StopPodSandbox for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" returns successfully" Oct 31 00:52:14.554232 systemd[1]: run-netns-cni\x2da5c17199\x2dd781\x2dfdb1\x2d9ce0\x2deb194c99c504.mount: Deactivated successfully. 
Oct 31 00:52:14.557690 containerd[1535]: time="2025-10-31T00:52:14.556866592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-7s98w,Uid:3561ae24-137d-44ba-89a5-d4068542bce6,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:52:14.675641 kubelet[2709]: E1031 00:52:14.674194 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df" Oct 31 00:52:14.684380 systemd-networkd[1463]: cali74905b13898: Link UP Oct 31 00:52:14.687494 systemd-networkd[1463]: cali74905b13898: Gained carrier Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.592 [INFO][4451] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.604 [INFO][4451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--tx48q-eth0 coredns-66bc5c9577- kube-system caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5 954 0 2025-10-31 00:51:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-tx48q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74905b13898 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.604 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.633 [INFO][4477] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" HandleID="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.633 [INFO][4477] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" HandleID="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-tx48q", "timestamp":"2025-10-31 00:52:14.633475746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.633 [INFO][4477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.633 [INFO][4477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.633 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.643 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.650 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.655 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.658 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.659 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.659 [INFO][4477] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.660 [INFO][4477] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242 Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.663 [INFO][4477] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.668 [INFO][4477] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.668 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" host="localhost" Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.668 [INFO][4477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:14.715954 containerd[1535]: 2025-10-31 00:52:14.668 [INFO][4477] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" HandleID="k8s-pod-network.570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.716968 containerd[1535]: 2025-10-31 00:52:14.675 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tx48q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-tx48q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74905b13898", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:14.716968 containerd[1535]: 2025-10-31 00:52:14.675 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.716968 containerd[1535]: 2025-10-31 00:52:14.675 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74905b13898 ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 
00:52:14.716968 containerd[1535]: 2025-10-31 00:52:14.688 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.716968 containerd[1535]: 2025-10-31 00:52:14.688 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tx48q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242", Pod:"coredns-66bc5c9577-tx48q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74905b13898", 
MAC:"0a:ef:11:16:ba:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:14.716968 containerd[1535]: 2025-10-31 00:52:14.713 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242" Namespace="kube-system" Pod="coredns-66bc5c9577-tx48q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:14.736286 containerd[1535]: time="2025-10-31T00:52:14.736116249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:14.736518 containerd[1535]: time="2025-10-31T00:52:14.736495532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:14.736560 containerd[1535]: time="2025-10-31T00:52:14.736531907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:14.737276 containerd[1535]: time="2025-10-31T00:52:14.736971266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:14.754973 systemd[1]: Started cri-containerd-570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242.scope - libcontainer container 570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242. Oct 31 00:52:14.764797 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:14.768693 systemd-networkd[1463]: cali0dfb2244a1a: Gained IPv6LL Oct 31 00:52:14.771855 systemd-networkd[1463]: cali81674dde2e9: Link UP Oct 31 00:52:14.774235 systemd-networkd[1463]: cali81674dde2e9: Gained carrier Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.606 [INFO][4466] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.614 [INFO][4466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0 calico-apiserver-6577bb4886- calico-apiserver 3561ae24-137d-44ba-89a5-d4068542bce6 955 0 2025-10-31 00:51:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6577bb4886 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6577bb4886-7s98w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81674dde2e9 [] [] }} ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.614 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.650 [INFO][4483] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" HandleID="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.650 [INFO][4483] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" HandleID="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6577bb4886-7s98w", "timestamp":"2025-10-31 00:52:14.650257148 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.651 [INFO][4483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.668 [INFO][4483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.669 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.744 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.754 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.758 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.759 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.760 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.760 [INFO][4483] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.761 [INFO][4483] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718 Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.764 [INFO][4483] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.767 [INFO][4483] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.767 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" host="localhost" Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.767 [INFO][4483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:14.788610 containerd[1535]: 2025-10-31 00:52:14.767 [INFO][4483] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" HandleID="k8s-pod-network.2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.791850 containerd[1535]: 2025-10-31 00:52:14.768 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"3561ae24-137d-44ba-89a5-d4068542bce6", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6577bb4886-7s98w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81674dde2e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:14.791850 containerd[1535]: 2025-10-31 00:52:14.768 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.791850 containerd[1535]: 2025-10-31 00:52:14.769 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81674dde2e9 ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.791850 containerd[1535]: 2025-10-31 00:52:14.775 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.791850 containerd[1535]: 2025-10-31 00:52:14.776 [INFO][4466] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"3561ae24-137d-44ba-89a5-d4068542bce6", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718", Pod:"calico-apiserver-6577bb4886-7s98w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81674dde2e9", MAC:"de:27:94:e9:80:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:14.791850 containerd[1535]: 2025-10-31 00:52:14.786 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-7s98w" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:14.804060 containerd[1535]: time="2025-10-31T00:52:14.803840265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tx48q,Uid:caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242\"" Oct 31 00:52:14.806375 containerd[1535]: time="2025-10-31T00:52:14.806147291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:14.806375 containerd[1535]: time="2025-10-31T00:52:14.806203118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:14.806375 containerd[1535]: time="2025-10-31T00:52:14.806221121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:14.806375 containerd[1535]: time="2025-10-31T00:52:14.806311950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:14.809971 containerd[1535]: time="2025-10-31T00:52:14.809952646Z" level=info msg="CreateContainer within sandbox \"570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:52:14.819440 containerd[1535]: time="2025-10-31T00:52:14.819422873Z" level=info msg="CreateContainer within sandbox \"570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"601c05c856b5c1a208dab3295232e6928dd154bea7219fac0f4ed40e076d29cf\"" Oct 31 00:52:14.819909 containerd[1535]: time="2025-10-31T00:52:14.819896050Z" level=info msg="StartContainer for \"601c05c856b5c1a208dab3295232e6928dd154bea7219fac0f4ed40e076d29cf\"" Oct 31 00:52:14.820723 systemd[1]: Started cri-containerd-2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718.scope - libcontainer container 2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718. Oct 31 00:52:14.829668 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:14.846699 systemd[1]: Started cri-containerd-601c05c856b5c1a208dab3295232e6928dd154bea7219fac0f4ed40e076d29cf.scope - libcontainer container 601c05c856b5c1a208dab3295232e6928dd154bea7219fac0f4ed40e076d29cf. 
Oct 31 00:52:14.858026 containerd[1535]: time="2025-10-31T00:52:14.858001673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-7s98w,Uid:3561ae24-137d-44ba-89a5-d4068542bce6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718\"" Oct 31 00:52:14.862099 containerd[1535]: time="2025-10-31T00:52:14.861947299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:52:14.872642 containerd[1535]: time="2025-10-31T00:52:14.872598890Z" level=info msg="StartContainer for \"601c05c856b5c1a208dab3295232e6928dd154bea7219fac0f4ed40e076d29cf\" returns successfully" Oct 31 00:52:15.192661 containerd[1535]: time="2025-10-31T00:52:15.191616263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:15.192661 containerd[1535]: time="2025-10-31T00:52:15.191978768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:52:15.192661 containerd[1535]: time="2025-10-31T00:52:15.192033671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:52:15.193639 kubelet[2709]: E1031 00:52:15.193079 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:15.193639 kubelet[2709]: E1031 00:52:15.193112 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:15.193911 kubelet[2709]: E1031 00:52:15.193757 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6577bb4886-7s98w_calico-apiserver(3561ae24-137d-44ba-89a5-d4068542bce6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:15.193911 kubelet[2709]: E1031 00:52:15.193786 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6" Oct 31 00:52:15.673912 kubelet[2709]: E1031 00:52:15.673881 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6" Oct 31 
00:52:15.677656 kubelet[2709]: E1031 00:52:15.676876 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df" Oct 31 00:52:15.698781 kubelet[2709]: I1031 00:52:15.698745 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tx48q" podStartSLOduration=35.698733954 podStartE2EDuration="35.698733954s" podCreationTimestamp="2025-10-31 00:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:15.698443824 +0000 UTC m=+43.457893610" watchObservedRunningTime="2025-10-31 00:52:15.698733954 +0000 UTC m=+43.458183734" Oct 31 00:52:16.368816 systemd-networkd[1463]: cali74905b13898: Gained IPv6LL Oct 31 00:52:16.473940 containerd[1535]: time="2025-10-31T00:52:16.473730935Z" level=info msg="StopPodSandbox for \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\"" Oct 31 00:52:16.476038 containerd[1535]: time="2025-10-31T00:52:16.474203629Z" level=info msg="StopPodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\"" Oct 31 00:52:16.476189 containerd[1535]: time="2025-10-31T00:52:16.476120802Z" level=info msg="StopPodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\"" Oct 31 00:52:16.497542 systemd-networkd[1463]: cali81674dde2e9: Gained IPv6LL Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4682] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4682] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" iface="eth0" netns="/var/run/netns/cni-da4526e9-719d-8acd-1394-b1e4ff022868" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4682] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" iface="eth0" netns="/var/run/netns/cni-da4526e9-719d-8acd-1394-b1e4ff022868" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.528 [INFO][4682] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" iface="eth0" netns="/var/run/netns/cni-da4526e9-719d-8acd-1394-b1e4ff022868" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.528 [INFO][4682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.528 [INFO][4682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.546 [INFO][4702] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.546 [INFO][4702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.546 [INFO][4702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.551 [WARNING][4702] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.551 [INFO][4702] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.552 [INFO][4702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:16.554249 containerd[1535]: 2025-10-31 00:52:16.553 [INFO][4682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:16.555497 containerd[1535]: time="2025-10-31T00:52:16.555469647Z" level=info msg="TearDown network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" successfully" Oct 31 00:52:16.555553 containerd[1535]: time="2025-10-31T00:52:16.555543931Z" level=info msg="StopPodSandbox for \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" returns successfully" Oct 31 00:52:16.557310 systemd[1]: run-netns-cni\x2dda4526e9\x2d719d\x2d8acd\x2d1394\x2db1e4ff022868.mount: Deactivated successfully. 
Oct 31 00:52:16.559764 containerd[1535]: time="2025-10-31T00:52:16.559747633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c8975546-52zt4,Uid:fb892052-e9f7-4494-bd9e-d42433970af9,Namespace:calico-system,Attempt:1,}" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.525 [INFO][4680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4680] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" iface="eth0" netns="/var/run/netns/cni-ba06fd79-951b-1c9e-70a4-c184c84749b5" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4680] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" iface="eth0" netns="/var/run/netns/cni-ba06fd79-951b-1c9e-70a4-c184c84749b5" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4680] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" iface="eth0" netns="/var/run/netns/cni-ba06fd79-951b-1c9e-70a4-c184c84749b5" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.527 [INFO][4680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.563 [INFO][4700] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.563 [INFO][4700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.563 [INFO][4700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.574 [WARNING][4700] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.575 [INFO][4700] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.576 [INFO][4700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:16.578718 containerd[1535]: 2025-10-31 00:52:16.577 [INFO][4680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:16.580172 systemd[1]: run-netns-cni\x2dba06fd79\x2d951b\x2d1c9e\x2d70a4\x2dc184c84749b5.mount: Deactivated successfully. 
Oct 31 00:52:16.580540 containerd[1535]: time="2025-10-31T00:52:16.580522267Z" level=info msg="TearDown network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" successfully" Oct 31 00:52:16.580540 containerd[1535]: time="2025-10-31T00:52:16.580538358Z" level=info msg="StopPodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" returns successfully" Oct 31 00:52:16.581948 containerd[1535]: time="2025-10-31T00:52:16.581932194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-mlhcr,Uid:38874b0f-68ea-48b6-8bf2-17a9a00061c5,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.532 [INFO][4681] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.532 [INFO][4681] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" iface="eth0" netns="/var/run/netns/cni-58bb8d84-c09e-c9d5-2c8e-d7be9fa03622" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.533 [INFO][4681] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" iface="eth0" netns="/var/run/netns/cni-58bb8d84-c09e-c9d5-2c8e-d7be9fa03622" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.533 [INFO][4681] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" iface="eth0" netns="/var/run/netns/cni-58bb8d84-c09e-c9d5-2c8e-d7be9fa03622" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.533 [INFO][4681] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.533 [INFO][4681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.577 [INFO][4710] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.577 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.577 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.585 [WARNING][4710] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.585 [INFO][4710] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.586 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:16.592193 containerd[1535]: 2025-10-31 00:52:16.587 [INFO][4681] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:16.592951 containerd[1535]: time="2025-10-31T00:52:16.592483840Z" level=info msg="TearDown network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" successfully" Oct 31 00:52:16.592951 containerd[1535]: time="2025-10-31T00:52:16.592501068Z" level=info msg="StopPodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" returns successfully" Oct 31 00:52:16.597302 containerd[1535]: time="2025-10-31T00:52:16.597081014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czb88,Uid:487d8bf9-c5d4-4162-b747-015052300a2e,Namespace:kube-system,Attempt:1,}" Oct 31 00:52:16.683281 kubelet[2709]: E1031 00:52:16.683223 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6" Oct 31 00:52:16.685157 systemd-networkd[1463]: calid20ff29def5: Link UP Oct 31 00:52:16.687045 systemd-networkd[1463]: calid20ff29def5: Gained carrier Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.620 [INFO][4742] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.627 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--czb88-eth0 coredns-66bc5c9577- kube-system 487d8bf9-c5d4-4162-b747-015052300a2e 994 0 2025-10-31 00:51:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-czb88 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid20ff29def5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.627 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.652 [INFO][4764] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" HandleID="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.653 [INFO][4764] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" HandleID="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-czb88", "timestamp":"2025-10-31 00:52:16.652984442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.653 [INFO][4764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.653 [INFO][4764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.653 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.659 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.662 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.664 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.664 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.665 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.665 [INFO][4764] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.666 [INFO][4764] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861 Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.669 [INFO][4764] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.672 [INFO][4764] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.672 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" host="localhost" Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.672 [INFO][4764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:16.718022 containerd[1535]: 2025-10-31 00:52:16.672 [INFO][4764] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" HandleID="k8s-pod-network.ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.720837 containerd[1535]: 2025-10-31 00:52:16.677 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--czb88-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"487d8bf9-c5d4-4162-b747-015052300a2e", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-czb88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20ff29def5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:16.720837 containerd[1535]: 2025-10-31 00:52:16.679 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.720837 containerd[1535]: 2025-10-31 00:52:16.679 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid20ff29def5 ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 
00:52:16.720837 containerd[1535]: 2025-10-31 00:52:16.686 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.720837 containerd[1535]: 2025-10-31 00:52:16.687 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--czb88-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"487d8bf9-c5d4-4162-b747-015052300a2e", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861", Pod:"coredns-66bc5c9577-czb88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20ff29def5", 
MAC:"46:1c:0d:1b:10:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:16.720837 containerd[1535]: 2025-10-31 00:52:16.717 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861" Namespace="kube-system" Pod="coredns-66bc5c9577-czb88" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:16.738729 containerd[1535]: time="2025-10-31T00:52:16.738676276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:16.738729 containerd[1535]: time="2025-10-31T00:52:16.738708794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:16.738844 containerd[1535]: time="2025-10-31T00:52:16.738715702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:16.738844 containerd[1535]: time="2025-10-31T00:52:16.738761673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:16.758850 systemd[1]: Started cri-containerd-ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861.scope - libcontainer container ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861. Oct 31 00:52:16.788868 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:16.793011 systemd-networkd[1463]: calie536c70d526: Link UP Oct 31 00:52:16.793125 systemd-networkd[1463]: calie536c70d526: Gained carrier Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.619 [INFO][4733] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.630 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0 calico-apiserver-6577bb4886- calico-apiserver 38874b0f-68ea-48b6-8bf2-17a9a00061c5 992 0 2025-10-31 00:51:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6577bb4886 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6577bb4886-mlhcr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie536c70d526 [] [] }} ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.630 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.656 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" HandleID="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.656 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" HandleID="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6577bb4886-mlhcr", "timestamp":"2025-10-31 00:52:16.656529937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.656 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.672 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.672 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.760 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.764 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.766 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.767 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.773 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.773 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.775 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291 Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.782 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.786 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.787 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" host="localhost" Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.787 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:16.805928 containerd[1535]: 2025-10-31 00:52:16.787 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" HandleID="k8s-pod-network.47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.808038 containerd[1535]: 2025-10-31 00:52:16.788 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"38874b0f-68ea-48b6-8bf2-17a9a00061c5", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6577bb4886-mlhcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie536c70d526", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:16.808038 containerd[1535]: 2025-10-31 00:52:16.788 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.808038 containerd[1535]: 2025-10-31 00:52:16.788 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie536c70d526 ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.808038 containerd[1535]: 2025-10-31 00:52:16.791 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.808038 containerd[1535]: 2025-10-31 00:52:16.791 [INFO][4733] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"38874b0f-68ea-48b6-8bf2-17a9a00061c5", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291", Pod:"calico-apiserver-6577bb4886-mlhcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie536c70d526", MAC:"f2:6e:d5:0d:6c:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:16.808038 containerd[1535]: 2025-10-31 00:52:16.805 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291" Namespace="calico-apiserver" Pod="calico-apiserver-6577bb4886-mlhcr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:16.832988 containerd[1535]: time="2025-10-31T00:52:16.832718910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:16.832988 containerd[1535]: time="2025-10-31T00:52:16.832773328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:16.832988 containerd[1535]: time="2025-10-31T00:52:16.832797906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:16.832988 containerd[1535]: time="2025-10-31T00:52:16.832862754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:16.835229 containerd[1535]: time="2025-10-31T00:52:16.835120908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-czb88,Uid:487d8bf9-c5d4-4162-b747-015052300a2e,Namespace:kube-system,Attempt:1,} returns sandbox id \"ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861\"" Oct 31 00:52:16.851696 systemd[1]: Started cri-containerd-47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291.scope - libcontainer container 47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291. 
Oct 31 00:52:16.856003 containerd[1535]: time="2025-10-31T00:52:16.855978792Z" level=info msg="CreateContainer within sandbox \"ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:52:16.871647 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:16.912386 containerd[1535]: time="2025-10-31T00:52:16.912319669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6577bb4886-mlhcr,Uid:38874b0f-68ea-48b6-8bf2-17a9a00061c5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291\"" Oct 31 00:52:16.914320 systemd-networkd[1463]: cali3826aed581c: Link UP Oct 31 00:52:16.915151 systemd-networkd[1463]: cali3826aed581c: Gained carrier Oct 31 00:52:16.924028 containerd[1535]: time="2025-10-31T00:52:16.916302168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:52:16.926272 containerd[1535]: time="2025-10-31T00:52:16.926202489Z" level=info msg="CreateContainer within sandbox \"ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cee27c9005b1d484ea241964fb00dedc759bb652888117c63bf4a0f2cc207253\"" Oct 31 00:52:16.926664 containerd[1535]: time="2025-10-31T00:52:16.926609384Z" level=info msg="StartContainer for \"cee27c9005b1d484ea241964fb00dedc759bb652888117c63bf4a0f2cc207253\"" Oct 31 00:52:16.943729 systemd[1]: Started cri-containerd-cee27c9005b1d484ea241964fb00dedc759bb652888117c63bf4a0f2cc207253.scope - libcontainer container cee27c9005b1d484ea241964fb00dedc759bb652888117c63bf4a0f2cc207253. 
Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.611 [INFO][4725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.624 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0 calico-kube-controllers-84c8975546- calico-system fb892052-e9f7-4494-bd9e-d42433970af9 993 0 2025-10-31 00:51:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84c8975546 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84c8975546-52zt4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3826aed581c [] [] }} ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.624 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.657 [INFO][4759] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" HandleID="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.947238 containerd[1535]: 
2025-10-31 00:52:16.657 [INFO][4759] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" HandleID="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84c8975546-52zt4", "timestamp":"2025-10-31 00:52:16.657186654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.657 [INFO][4759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.787 [INFO][4759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.787 [INFO][4759] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.861 [INFO][4759] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.865 [INFO][4759] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.869 [INFO][4759] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.871 [INFO][4759] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.874 [INFO][4759] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.874 [INFO][4759] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.878 [INFO][4759] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0 Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.884 [INFO][4759] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.900 [INFO][4759] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.901 [INFO][4759] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" host="localhost" Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.901 [INFO][4759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:16.947238 containerd[1535]: 2025-10-31 00:52:16.901 [INFO][4759] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" HandleID="k8s-pod-network.6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.947686 containerd[1535]: 2025-10-31 00:52:16.906 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0", GenerateName:"calico-kube-controllers-84c8975546-", Namespace:"calico-system", SelfLink:"", UID:"fb892052-e9f7-4494-bd9e-d42433970af9", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c8975546", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84c8975546-52zt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3826aed581c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:16.947686 containerd[1535]: 2025-10-31 00:52:16.909 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.947686 containerd[1535]: 2025-10-31 00:52:16.909 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3826aed581c ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.947686 containerd[1535]: 2025-10-31 00:52:16.919 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.947686 containerd[1535]: 
2025-10-31 00:52:16.924 [INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0", GenerateName:"calico-kube-controllers-84c8975546-", Namespace:"calico-system", SelfLink:"", UID:"fb892052-e9f7-4494-bd9e-d42433970af9", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c8975546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0", Pod:"calico-kube-controllers-84c8975546-52zt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3826aed581c", MAC:"62:93:19:3b:e1:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:16.947686 containerd[1535]: 
2025-10-31 00:52:16.941 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0" Namespace="calico-system" Pod="calico-kube-controllers-84c8975546-52zt4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:16.971943 containerd[1535]: time="2025-10-31T00:52:16.971779779Z" level=info msg="StartContainer for \"cee27c9005b1d484ea241964fb00dedc759bb652888117c63bf4a0f2cc207253\" returns successfully" Oct 31 00:52:16.974030 containerd[1535]: time="2025-10-31T00:52:16.973303351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:16.974030 containerd[1535]: time="2025-10-31T00:52:16.973332214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:16.974030 containerd[1535]: time="2025-10-31T00:52:16.973348737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:16.974030 containerd[1535]: time="2025-10-31T00:52:16.973402648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:16.991719 systemd[1]: Started cri-containerd-6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0.scope - libcontainer container 6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0. 
Oct 31 00:52:17.010957 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:17.044584 containerd[1535]: time="2025-10-31T00:52:17.044554503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c8975546-52zt4,Uid:fb892052-e9f7-4494-bd9e-d42433970af9,Namespace:calico-system,Attempt:1,} returns sandbox id \"6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0\"" Oct 31 00:52:17.244612 containerd[1535]: time="2025-10-31T00:52:17.244582048Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:17.248201 containerd[1535]: time="2025-10-31T00:52:17.248135156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:52:17.248289 containerd[1535]: time="2025-10-31T00:52:17.248197777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:52:17.248514 kubelet[2709]: E1031 00:52:17.248352 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:17.248514 kubelet[2709]: E1031 00:52:17.248399 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:17.248818 containerd[1535]: time="2025-10-31T00:52:17.248803054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:52:17.268717 kubelet[2709]: E1031 00:52:17.268694 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6577bb4886-mlhcr_calico-apiserver(38874b0f-68ea-48b6-8bf2-17a9a00061c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:17.268788 kubelet[2709]: E1031 00:52:17.268727 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:17.560255 systemd[1]: run-netns-cni\x2d58bb8d84\x2dc09e\x2dc9d5\x2d2c8e\x2dd7be9fa03622.mount: Deactivated successfully. 
Oct 31 00:52:17.585925 containerd[1535]: time="2025-10-31T00:52:17.585893096Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:17.586396 containerd[1535]: time="2025-10-31T00:52:17.586360916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:52:17.586441 containerd[1535]: time="2025-10-31T00:52:17.586422743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:52:17.586869 kubelet[2709]: E1031 00:52:17.586562 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:52:17.586869 kubelet[2709]: E1031 00:52:17.586598 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:52:17.586869 kubelet[2709]: E1031 00:52:17.586669 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84c8975546-52zt4_calico-system(fb892052-e9f7-4494-bd9e-d42433970af9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:17.586869 kubelet[2709]: E1031 00:52:17.586696 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9" Oct 31 00:52:17.683553 kubelet[2709]: E1031 00:52:17.683528 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9" Oct 31 00:52:17.687529 kubelet[2709]: E1031 00:52:17.687457 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:17.697026 kubelet[2709]: I1031 00:52:17.696422 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-czb88" podStartSLOduration=37.696412409 podStartE2EDuration="37.696412409s" podCreationTimestamp="2025-10-31 00:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:52:17.696040086 +0000 UTC m=+45.455489871" watchObservedRunningTime="2025-10-31 00:52:17.696412409 +0000 UTC m=+45.455862187" Oct 31 00:52:18.032783 systemd-networkd[1463]: calie536c70d526: Gained IPv6LL Oct 31 00:52:18.096750 systemd-networkd[1463]: cali3826aed581c: Gained IPv6LL Oct 31 00:52:18.474300 containerd[1535]: time="2025-10-31T00:52:18.473520788Z" level=info msg="StopPodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\"" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.501 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.501 [INFO][5009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" iface="eth0" netns="/var/run/netns/cni-77337adb-efb6-995a-67a8-00e02a44c08a" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.501 [INFO][5009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" iface="eth0" netns="/var/run/netns/cni-77337adb-efb6-995a-67a8-00e02a44c08a" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.501 [INFO][5009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" iface="eth0" netns="/var/run/netns/cni-77337adb-efb6-995a-67a8-00e02a44c08a" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.501 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.501 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.515 [INFO][5016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.515 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.515 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.518 [WARNING][5016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.518 [INFO][5016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.519 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:18.522937 containerd[1535]: 2025-10-31 00:52:18.520 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:18.523474 containerd[1535]: time="2025-10-31T00:52:18.523335788Z" level=info msg="TearDown network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" successfully" Oct 31 00:52:18.523474 containerd[1535]: time="2025-10-31T00:52:18.523365293Z" level=info msg="StopPodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" returns successfully" Oct 31 00:52:18.523270 systemd[1]: run-netns-cni\x2d77337adb\x2defb6\x2d995a\x2d67a8\x2d00e02a44c08a.mount: Deactivated successfully. 
Oct 31 00:52:18.524930 containerd[1535]: time="2025-10-31T00:52:18.524912335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfqgh,Uid:5e393e8e-d87c-4c00-a0d8-1932978c09f4,Namespace:calico-system,Attempt:1,}" Oct 31 00:52:18.586940 systemd-networkd[1463]: cali862ebfa03bc: Link UP Oct 31 00:52:18.587298 systemd-networkd[1463]: cali862ebfa03bc: Gained carrier Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.544 [INFO][5026] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.549 [INFO][5026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qfqgh-eth0 csi-node-driver- calico-system 5e393e8e-d87c-4c00-a0d8-1932978c09f4 1039 0 2025-10-31 00:51:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qfqgh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali862ebfa03bc [] [] }} ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.550 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.563 [INFO][5034] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" HandleID="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.564 [INFO][5034] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" HandleID="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qfqgh", "timestamp":"2025-10-31 00:52:18.563854832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.564 [INFO][5034] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.564 [INFO][5034] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.564 [INFO][5034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.568 [INFO][5034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.569 [INFO][5034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.572 [INFO][5034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.573 [INFO][5034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.574 [INFO][5034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.574 [INFO][5034] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.574 [INFO][5034] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7 Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.576 [INFO][5034] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.581 [INFO][5034] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.581 [INFO][5034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" host="localhost" Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.581 [INFO][5034] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:18.596911 containerd[1535]: 2025-10-31 00:52:18.581 [INFO][5034] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" HandleID="k8s-pod-network.46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.598234 containerd[1535]: 2025-10-31 00:52:18.583 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qfqgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e393e8e-d87c-4c00-a0d8-1932978c09f4", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qfqgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali862ebfa03bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:18.598234 containerd[1535]: 2025-10-31 00:52:18.585 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.598234 containerd[1535]: 2025-10-31 00:52:18.585 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali862ebfa03bc ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.598234 containerd[1535]: 2025-10-31 00:52:18.587 [INFO][5026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.598234 containerd[1535]: 2025-10-31 00:52:18.587 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" 
Namespace="calico-system" Pod="csi-node-driver-qfqgh" WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qfqgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e393e8e-d87c-4c00-a0d8-1932978c09f4", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7", Pod:"csi-node-driver-qfqgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali862ebfa03bc", MAC:"76:f5:fd:e0:98:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:18.598234 containerd[1535]: 2025-10-31 00:52:18.595 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7" Namespace="calico-system" Pod="csi-node-driver-qfqgh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:18.606997 containerd[1535]: time="2025-10-31T00:52:18.606949400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:52:18.606997 containerd[1535]: time="2025-10-31T00:52:18.606984858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:52:18.607231 containerd[1535]: time="2025-10-31T00:52:18.607001055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:18.607231 containerd[1535]: time="2025-10-31T00:52:18.607049247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:52:18.621729 systemd[1]: Started cri-containerd-46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7.scope - libcontainer container 46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7. 
Oct 31 00:52:18.629690 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:52:18.636150 containerd[1535]: time="2025-10-31T00:52:18.636130080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qfqgh,Uid:5e393e8e-d87c-4c00-a0d8-1932978c09f4,Namespace:calico-system,Attempt:1,} returns sandbox id \"46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7\"" Oct 31 00:52:18.637208 containerd[1535]: time="2025-10-31T00:52:18.637156455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:52:18.672826 systemd-networkd[1463]: calid20ff29def5: Gained IPv6LL Oct 31 00:52:18.690420 kubelet[2709]: E1031 00:52:18.690384 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9" Oct 31 00:52:18.691409 kubelet[2709]: E1031 00:52:18.691367 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" 
podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:19.025364 containerd[1535]: time="2025-10-31T00:52:19.025338333Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:19.060659 containerd[1535]: time="2025-10-31T00:52:19.060577764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:52:19.060738 containerd[1535]: time="2025-10-31T00:52:19.060647159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:52:19.060783 kubelet[2709]: E1031 00:52:19.060757 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:52:19.060810 kubelet[2709]: E1031 00:52:19.060785 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:52:19.060846 kubelet[2709]: E1031 00:52:19.060832 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" logger="UnhandledError" Oct 31 00:52:19.062112 containerd[1535]: time="2025-10-31T00:52:19.061984716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:52:19.375471 containerd[1535]: time="2025-10-31T00:52:19.375373207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:19.375984 containerd[1535]: time="2025-10-31T00:52:19.375958062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:52:19.376655 containerd[1535]: time="2025-10-31T00:52:19.376017863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:52:19.376719 kubelet[2709]: E1031 00:52:19.376126 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:52:19.376719 kubelet[2709]: E1031 00:52:19.376166 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:52:19.376719 kubelet[2709]: E1031 00:52:19.376225 2709 
kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:19.376865 kubelet[2709]: E1031 00:52:19.376257 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:52:19.697946 kubelet[2709]: E1031 00:52:19.697371 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4" Oct 31 00:52:19.952858 systemd-networkd[1463]: cali862ebfa03bc: Gained IPv6LL Oct 31 00:52:22.199959 kubelet[2709]: I1031 00:52:22.199736 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:52:22.533657 kernel: bpftool[5183]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 00:52:22.735117 systemd-networkd[1463]: vxlan.calico: Link UP Oct 31 00:52:22.735122 systemd-networkd[1463]: vxlan.calico: Gained carrier Oct 31 00:52:23.472893 containerd[1535]: time="2025-10-31T00:52:23.472865534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:52:23.853904 containerd[1535]: time="2025-10-31T00:52:23.853711023Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:23.854145 containerd[1535]: time="2025-10-31T00:52:23.854117147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:52:23.854206 containerd[1535]: time="2025-10-31T00:52:23.854176288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:52:23.854317 kubelet[2709]: E1031 00:52:23.854289 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:52:23.854551 kubelet[2709]: E1031 00:52:23.854322 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:52:23.854551 kubelet[2709]: E1031 00:52:23.854376 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-565cb9cccd-v4n4z_calico-system(e4d268cd-a761-4f69-b6d1-2ef35e9f3bff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:23.855527 containerd[1535]: time="2025-10-31T00:52:23.855508097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:52:24.267120 containerd[1535]: time="2025-10-31T00:52:24.267081420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:24.267496 containerd[1535]: time="2025-10-31T00:52:24.267469864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:52:24.267569 containerd[1535]: time="2025-10-31T00:52:24.267524367Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:52:24.267683 kubelet[2709]: E1031 00:52:24.267652 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:52:24.267754 kubelet[2709]: E1031 00:52:24.267689 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:52:24.267754 kubelet[2709]: E1031 00:52:24.267745 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-565cb9cccd-v4n4z_calico-system(e4d268cd-a761-4f69-b6d1-2ef35e9f3bff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:24.268504 kubelet[2709]: E1031 00:52:24.267775 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff" Oct 31 00:52:24.433163 systemd-networkd[1463]: vxlan.calico: Gained IPv6LL Oct 31 00:52:26.472366 containerd[1535]: time="2025-10-31T00:52:26.472281211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:52:26.802834 containerd[1535]: time="2025-10-31T00:52:26.802666482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:26.803214 containerd[1535]: time="2025-10-31T00:52:26.803117404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:52:26.803214 containerd[1535]: time="2025-10-31T00:52:26.803145515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:52:26.803339 kubelet[2709]: E1031 00:52:26.803294 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:52:26.803339 kubelet[2709]: E1031 00:52:26.803345 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:52:26.803753 kubelet[2709]: E1031 00:52:26.803402 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-62kb2_calico-system(56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:26.803753 kubelet[2709]: E1031 00:52:26.803428 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df" Oct 31 00:52:29.471970 containerd[1535]: time="2025-10-31T00:52:29.471754317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:52:29.787071 containerd[1535]: time="2025-10-31T00:52:29.786986665Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:29.787449 containerd[1535]: time="2025-10-31T00:52:29.787403208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:52:29.787490 containerd[1535]: time="2025-10-31T00:52:29.787435915Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:52:29.787574 kubelet[2709]: E1031 00:52:29.787546 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:29.787574 kubelet[2709]: E1031 00:52:29.787573 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:29.787829 kubelet[2709]: E1031 00:52:29.787646 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6577bb4886-7s98w_calico-apiserver(3561ae24-137d-44ba-89a5-d4068542bce6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:29.787829 kubelet[2709]: E1031 00:52:29.787677 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6" Oct 31 
00:52:31.472234 containerd[1535]: time="2025-10-31T00:52:31.472177644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:52:31.804846 containerd[1535]: time="2025-10-31T00:52:31.804763609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:31.805251 containerd[1535]: time="2025-10-31T00:52:31.805222221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:52:31.805357 containerd[1535]: time="2025-10-31T00:52:31.805276367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:52:31.805389 kubelet[2709]: E1031 00:52:31.805333 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:52:31.805389 kubelet[2709]: E1031 00:52:31.805361 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:52:31.805670 kubelet[2709]: E1031 00:52:31.805409 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-84c8975546-52zt4_calico-system(fb892052-e9f7-4494-bd9e-d42433970af9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:31.805670 kubelet[2709]: E1031 00:52:31.805435 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9" Oct 31 00:52:32.503183 containerd[1535]: time="2025-10-31T00:52:32.503151743Z" level=info msg="StopPodSandbox for \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\"" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.533 [WARNING][5340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0", GenerateName:"calico-kube-controllers-84c8975546-", Namespace:"calico-system", SelfLink:"", UID:"fb892052-e9f7-4494-bd9e-d42433970af9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c8975546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0", Pod:"calico-kube-controllers-84c8975546-52zt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3826aed581c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.533 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.533 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" iface="eth0" netns="" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.533 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.533 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.548 [INFO][5347] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.548 [INFO][5347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.548 [INFO][5347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.552 [WARNING][5347] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.552 [INFO][5347] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.553 [INFO][5347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.555687 containerd[1535]: 2025-10-31 00:52:32.554 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.555687 containerd[1535]: time="2025-10-31T00:52:32.555529123Z" level=info msg="TearDown network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" successfully" Oct 31 00:52:32.555687 containerd[1535]: time="2025-10-31T00:52:32.555544308Z" level=info msg="StopPodSandbox for \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" returns successfully" Oct 31 00:52:32.556408 containerd[1535]: time="2025-10-31T00:52:32.556392120Z" level=info msg="RemovePodSandbox for \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\"" Oct 31 00:52:32.556438 containerd[1535]: time="2025-10-31T00:52:32.556415182Z" level=info msg="Forcibly stopping sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\"" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.574 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0", GenerateName:"calico-kube-controllers-84c8975546-", Namespace:"calico-system", SelfLink:"", UID:"fb892052-e9f7-4494-bd9e-d42433970af9", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c8975546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d13ff949a4a2d6053f3072f43b4acb26028941f7b8c46e7c083f74517c22fa0", Pod:"calico-kube-controllers-84c8975546-52zt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3826aed581c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.575 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.575 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" iface="eth0" netns="" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.575 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.575 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.586 [INFO][5368] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.586 [INFO][5368] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.586 [INFO][5368] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.590 [WARNING][5368] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.590 [INFO][5368] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" HandleID="k8s-pod-network.8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Workload="localhost-k8s-calico--kube--controllers--84c8975546--52zt4-eth0" Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.591 [INFO][5368] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.593685 containerd[1535]: 2025-10-31 00:52:32.592 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04" Oct 31 00:52:32.594020 containerd[1535]: time="2025-10-31T00:52:32.593709883Z" level=info msg="TearDown network for sandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" successfully" Oct 31 00:52:32.595232 containerd[1535]: time="2025-10-31T00:52:32.595212983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:32.595268 containerd[1535]: time="2025-10-31T00:52:32.595241034Z" level=info msg="RemovePodSandbox \"8b537c6e4d2b69aee3949653c38278980ff87664400857b087d62263b3754a04\" returns successfully" Oct 31 00:52:32.595667 containerd[1535]: time="2025-10-31T00:52:32.595654196Z" level=info msg="StopPodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\"" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.616 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--czb88-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"487d8bf9-c5d4-4162-b747-015052300a2e", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861", Pod:"coredns-66bc5c9577-czb88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20ff29def5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.616 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.616 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" iface="eth0" netns="" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.616 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.616 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.633 [INFO][5389] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.633 [INFO][5389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.633 [INFO][5389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.637 [WARNING][5389] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.637 [INFO][5389] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.638 [INFO][5389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.640264 containerd[1535]: 2025-10-31 00:52:32.639 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.640693 containerd[1535]: time="2025-10-31T00:52:32.640340538Z" level=info msg="TearDown network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" successfully" Oct 31 00:52:32.640693 containerd[1535]: time="2025-10-31T00:52:32.640355345Z" level=info msg="StopPodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" returns successfully" Oct 31 00:52:32.640992 containerd[1535]: time="2025-10-31T00:52:32.640974647Z" level=info msg="RemovePodSandbox for \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\"" Oct 31 00:52:32.641049 containerd[1535]: time="2025-10-31T00:52:32.640991981Z" level=info msg="Forcibly stopping sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\"" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.659 [WARNING][5405] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--czb88-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"487d8bf9-c5d4-4162-b747-015052300a2e", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed5f6269457e0cbf1aa5400227a0d0a7ec85d07cede8bf834f313c470f5ec861", Pod:"coredns-66bc5c9577-czb88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20ff29def5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.659 [INFO][5405] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.659 [INFO][5405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" iface="eth0" netns="" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.659 [INFO][5405] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.659 [INFO][5405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.671 [INFO][5413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.671 [INFO][5413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.671 [INFO][5413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.674 [WARNING][5413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.674 [INFO][5413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" HandleID="k8s-pod-network.d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Workload="localhost-k8s-coredns--66bc5c9577--czb88-eth0" Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.675 [INFO][5413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.677526 containerd[1535]: 2025-10-31 00:52:32.676 [INFO][5405] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109" Oct 31 00:52:32.678068 containerd[1535]: time="2025-10-31T00:52:32.677513621Z" level=info msg="TearDown network for sandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" successfully" Oct 31 00:52:32.684368 containerd[1535]: time="2025-10-31T00:52:32.684355279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:32.684472 containerd[1535]: time="2025-10-31T00:52:32.684444647Z" level=info msg="RemovePodSandbox \"d6f3147cf3337156a02b0c9a820c943cf399fa8734ed1be5c3149eace8f07109\" returns successfully" Oct 31 00:52:32.684837 containerd[1535]: time="2025-10-31T00:52:32.684753306Z" level=info msg="StopPodSandbox for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\"" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.704 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"3561ae24-137d-44ba-89a5-d4068542bce6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718", Pod:"calico-apiserver-6577bb4886-7s98w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81674dde2e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.704 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.704 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" iface="eth0" netns="" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.704 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.704 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.716 [INFO][5434] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.716 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.716 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.721 [WARNING][5434] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.721 [INFO][5434] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.722 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.724971 containerd[1535]: 2025-10-31 00:52:32.723 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.725743 containerd[1535]: time="2025-10-31T00:52:32.725336227Z" level=info msg="TearDown network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" successfully" Oct 31 00:52:32.725743 containerd[1535]: time="2025-10-31T00:52:32.725351202Z" level=info msg="StopPodSandbox for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" returns successfully" Oct 31 00:52:32.725743 containerd[1535]: time="2025-10-31T00:52:32.725572075Z" level=info msg="RemovePodSandbox for \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\"" Oct 31 00:52:32.725743 containerd[1535]: time="2025-10-31T00:52:32.725585124Z" level=info msg="Forcibly stopping sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\"" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.747 [WARNING][5448] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"3561ae24-137d-44ba-89a5-d4068542bce6", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2896c845f7a2febab7991dfc577d089d1befcefd1468597fdb729d308517a718", Pod:"calico-apiserver-6577bb4886-7s98w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81674dde2e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.747 [INFO][5448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.747 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" iface="eth0" netns="" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.747 [INFO][5448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.747 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.759 [INFO][5455] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.759 [INFO][5455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.759 [INFO][5455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.763 [WARNING][5455] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.763 [INFO][5455] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" HandleID="k8s-pod-network.7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Workload="localhost-k8s-calico--apiserver--6577bb4886--7s98w-eth0" Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.764 [INFO][5455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.767578 containerd[1535]: 2025-10-31 00:52:32.765 [INFO][5448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea" Oct 31 00:52:32.767578 containerd[1535]: time="2025-10-31T00:52:32.766637473Z" level=info msg="TearDown network for sandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" successfully" Oct 31 00:52:32.768299 containerd[1535]: time="2025-10-31T00:52:32.768277888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:32.768373 containerd[1535]: time="2025-10-31T00:52:32.768363293Z" level=info msg="RemovePodSandbox \"7d1d21b2d5e8471fed85b6420e09a173dc91af84b6159b82570c8cfb5eeea9ea\" returns successfully" Oct 31 00:52:32.769716 containerd[1535]: time="2025-10-31T00:52:32.769700803Z" level=info msg="StopPodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\"" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.793 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tx48q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242", Pod:"coredns-66bc5c9577-tx48q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74905b13898", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.793 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.793 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" iface="eth0" netns="" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.793 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.793 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.804 [INFO][5476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.804 [INFO][5476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.804 [INFO][5476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.808 [WARNING][5476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.808 [INFO][5476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.808 [INFO][5476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.811644 containerd[1535]: 2025-10-31 00:52:32.809 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.814040 containerd[1535]: time="2025-10-31T00:52:32.811675263Z" level=info msg="TearDown network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" successfully" Oct 31 00:52:32.814040 containerd[1535]: time="2025-10-31T00:52:32.811690491Z" level=info msg="StopPodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" returns successfully" Oct 31 00:52:32.814040 containerd[1535]: time="2025-10-31T00:52:32.812658121Z" level=info msg="RemovePodSandbox for \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\"" Oct 31 00:52:32.814040 containerd[1535]: time="2025-10-31T00:52:32.812672467Z" level=info msg="Forcibly stopping sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\"" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.832 [WARNING][5490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tx48q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"caacd2fa-ec7d-4e23-bd05-4cbeb62fc6f5", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"570e91535d9bf045e5aac75623aa5c3f028026a5d26a04bb48f54fc4a68c5242", Pod:"coredns-66bc5c9577-tx48q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74905b13898", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.832 [INFO][5490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.832 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" iface="eth0" netns="" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.832 [INFO][5490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.832 [INFO][5490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.845 [INFO][5497] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.845 [INFO][5497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.845 [INFO][5497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.849 [WARNING][5497] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.849 [INFO][5497] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" HandleID="k8s-pod-network.ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Workload="localhost-k8s-coredns--66bc5c9577--tx48q-eth0" Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.850 [INFO][5497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.852045 containerd[1535]: 2025-10-31 00:52:32.851 [INFO][5490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8" Oct 31 00:52:32.852421 containerd[1535]: time="2025-10-31T00:52:32.852065491Z" level=info msg="TearDown network for sandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" successfully" Oct 31 00:52:32.853482 containerd[1535]: time="2025-10-31T00:52:32.853461524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:32.853514 containerd[1535]: time="2025-10-31T00:52:32.853487973Z" level=info msg="RemovePodSandbox \"ceb38f7ee9cb29205c35e10ae71428df3a5d39ec886165e32b917be45fb281c8\" returns successfully" Oct 31 00:52:32.853983 containerd[1535]: time="2025-10-31T00:52:32.853813706Z" level=info msg="StopPodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\"" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.872 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--62kb2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073", Pod:"goldmane-7c778bb748-62kb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0dfb2244a1a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.872 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.872 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" iface="eth0" netns="" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.872 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.872 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.884 [INFO][5518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.884 [INFO][5518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.884 [INFO][5518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.887 [WARNING][5518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.887 [INFO][5518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.888 [INFO][5518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.890308 containerd[1535]: 2025-10-31 00:52:32.889 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.891103 containerd[1535]: time="2025-10-31T00:52:32.890329906Z" level=info msg="TearDown network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" successfully" Oct 31 00:52:32.891103 containerd[1535]: time="2025-10-31T00:52:32.890345951Z" level=info msg="StopPodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" returns successfully" Oct 31 00:52:32.891103 containerd[1535]: time="2025-10-31T00:52:32.890888995Z" level=info msg="RemovePodSandbox for \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\"" Oct 31 00:52:32.891103 containerd[1535]: time="2025-10-31T00:52:32.890905350Z" level=info msg="Forcibly stopping sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\"" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.908 [WARNING][5532] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--62kb2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ba59e6168a2750c86cb266a5c5e9589be7d932399b904ba71b0ada0db657073", Pod:"goldmane-7c778bb748-62kb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0dfb2244a1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.909 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.909 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" iface="eth0" netns="" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.909 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.909 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.920 [INFO][5539] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.920 [INFO][5539] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.920 [INFO][5539] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.924 [WARNING][5539] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.924 [INFO][5539] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" HandleID="k8s-pod-network.4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Workload="localhost-k8s-goldmane--7c778bb748--62kb2-eth0" Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.925 [INFO][5539] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.927220 containerd[1535]: 2025-10-31 00:52:32.926 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f" Oct 31 00:52:32.927507 containerd[1535]: time="2025-10-31T00:52:32.927239045Z" level=info msg="TearDown network for sandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" successfully" Oct 31 00:52:32.928521 containerd[1535]: time="2025-10-31T00:52:32.928506784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:32.928572 containerd[1535]: time="2025-10-31T00:52:32.928534113Z" level=info msg="RemovePodSandbox \"4f84c823dec1fe4194385f81db9f5bb956efcbe330f3474159feffcdbb5d2e5f\" returns successfully" Oct 31 00:52:32.928855 containerd[1535]: time="2025-10-31T00:52:32.928840811Z" level=info msg="StopPodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\"" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.947 [WARNING][5553] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" WorkloadEndpoint="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.947 [INFO][5553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.947 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" iface="eth0" netns="" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.947 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.947 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.958 [INFO][5560] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.958 [INFO][5560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.958 [INFO][5560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.962 [WARNING][5560] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.962 [INFO][5560] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.962 [INFO][5560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:32.964617 containerd[1535]: 2025-10-31 00:52:32.963 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:32.965214 containerd[1535]: time="2025-10-31T00:52:32.964654116Z" level=info msg="TearDown network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" successfully" Oct 31 00:52:32.965214 containerd[1535]: time="2025-10-31T00:52:32.964672734Z" level=info msg="StopPodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" returns successfully" Oct 31 00:52:32.965214 containerd[1535]: time="2025-10-31T00:52:32.964998791Z" level=info msg="RemovePodSandbox for \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\"" Oct 31 00:52:32.965214 containerd[1535]: time="2025-10-31T00:52:32.965013671Z" level=info msg="Forcibly stopping sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\"" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:32.986 [WARNING][5574] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" WorkloadEndpoint="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:32.987 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:32.987 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" iface="eth0" netns="" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:32.987 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:32.987 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:32.999 [INFO][5586] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:33.000 [INFO][5586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:33.000 [INFO][5586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:33.004 [WARNING][5586] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:33.004 [INFO][5586] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" HandleID="k8s-pod-network.e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Workload="localhost-k8s-whisker--6b8495b656--fkzcv-eth0" Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:33.005 [INFO][5586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:33.007153 containerd[1535]: 2025-10-31 00:52:33.006 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604" Oct 31 00:52:33.007462 containerd[1535]: time="2025-10-31T00:52:33.007176380Z" level=info msg="TearDown network for sandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" successfully" Oct 31 00:52:33.008475 containerd[1535]: time="2025-10-31T00:52:33.008459858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:33.008529 containerd[1535]: time="2025-10-31T00:52:33.008486415Z" level=info msg="RemovePodSandbox \"e052b87dd21ca98617011b88fba7c81b4a9fe745c55e055df734d99cb65ec604\" returns successfully" Oct 31 00:52:33.008825 containerd[1535]: time="2025-10-31T00:52:33.008811026Z" level=info msg="StopPodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\"" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.032 [WARNING][5602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"38874b0f-68ea-48b6-8bf2-17a9a00061c5", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291", Pod:"calico-apiserver-6577bb4886-mlhcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie536c70d526", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.032 [INFO][5602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.032 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" iface="eth0" netns="" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.032 [INFO][5602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.032 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.045 [INFO][5611] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.045 [INFO][5611] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.045 [INFO][5611] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.049 [WARNING][5611] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.049 [INFO][5611] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.051 [INFO][5611] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:33.053074 containerd[1535]: 2025-10-31 00:52:33.052 [INFO][5602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.053391 containerd[1535]: time="2025-10-31T00:52:33.053142418Z" level=info msg="TearDown network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" successfully" Oct 31 00:52:33.053391 containerd[1535]: time="2025-10-31T00:52:33.053159649Z" level=info msg="StopPodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" returns successfully" Oct 31 00:52:33.053723 containerd[1535]: time="2025-10-31T00:52:33.053707550Z" level=info msg="RemovePodSandbox for \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\"" Oct 31 00:52:33.053747 containerd[1535]: time="2025-10-31T00:52:33.053726802Z" level=info msg="Forcibly stopping sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\"" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.074 [WARNING][5626] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0", GenerateName:"calico-apiserver-6577bb4886-", Namespace:"calico-apiserver", SelfLink:"", UID:"38874b0f-68ea-48b6-8bf2-17a9a00061c5", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6577bb4886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e58465b311ce174d7cdab046a88892a0727342950e3dfddf842d58aac99291", Pod:"calico-apiserver-6577bb4886-mlhcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie536c70d526", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.074 [INFO][5626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.074 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" iface="eth0" netns="" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.074 [INFO][5626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.074 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.087 [INFO][5633] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.087 [INFO][5633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.087 [INFO][5633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.091 [WARNING][5633] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.091 [INFO][5633] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" HandleID="k8s-pod-network.afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Workload="localhost-k8s-calico--apiserver--6577bb4886--mlhcr-eth0" Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.092 [INFO][5633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:33.094059 containerd[1535]: 2025-10-31 00:52:33.093 [INFO][5626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3" Oct 31 00:52:33.094396 containerd[1535]: time="2025-10-31T00:52:33.094124713Z" level=info msg="TearDown network for sandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" successfully" Oct 31 00:52:33.095680 containerd[1535]: time="2025-10-31T00:52:33.095663646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:33.095706 containerd[1535]: time="2025-10-31T00:52:33.095690820Z" level=info msg="RemovePodSandbox \"afc1584ace6ae6cad1154f5fe1167888c3bef1e302851579624a34bbdfaa0ae3\" returns successfully" Oct 31 00:52:33.096049 containerd[1535]: time="2025-10-31T00:52:33.095998642Z" level=info msg="StopPodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\"" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.114 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qfqgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e393e8e-d87c-4c00-a0d8-1932978c09f4", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7", Pod:"csi-node-driver-qfqgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali862ebfa03bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.114 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.114 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" iface="eth0" netns="" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.114 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.114 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.126 [INFO][5654] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.126 [INFO][5654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.126 [INFO][5654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.130 [WARNING][5654] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.130 [INFO][5654] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.130 [INFO][5654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:33.132825 containerd[1535]: 2025-10-31 00:52:33.131 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.133217 containerd[1535]: time="2025-10-31T00:52:33.132859305Z" level=info msg="TearDown network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" successfully" Oct 31 00:52:33.133217 containerd[1535]: time="2025-10-31T00:52:33.132874161Z" level=info msg="StopPodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" returns successfully" Oct 31 00:52:33.133271 containerd[1535]: time="2025-10-31T00:52:33.133257964Z" level=info msg="RemovePodSandbox for \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\"" Oct 31 00:52:33.133296 containerd[1535]: time="2025-10-31T00:52:33.133275931Z" level=info msg="Forcibly stopping sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\"" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.152 [WARNING][5668] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qfqgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e393e8e-d87c-4c00-a0d8-1932978c09f4", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 51, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d1458117031b1e0dc77c6fc0265a941dba5a52d92f145204bac36f459f9df7", Pod:"csi-node-driver-qfqgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali862ebfa03bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.153 [INFO][5668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.153 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" iface="eth0" netns="" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.153 [INFO][5668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.153 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.164 [INFO][5675] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.164 [INFO][5675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.164 [INFO][5675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.168 [WARNING][5675] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.168 [INFO][5675] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" HandleID="k8s-pod-network.4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Workload="localhost-k8s-csi--node--driver--qfqgh-eth0" Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.168 [INFO][5675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:52:33.170719 containerd[1535]: 2025-10-31 00:52:33.169 [INFO][5668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b" Oct 31 00:52:33.171066 containerd[1535]: time="2025-10-31T00:52:33.170741105Z" level=info msg="TearDown network for sandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" successfully" Oct 31 00:52:33.171972 containerd[1535]: time="2025-10-31T00:52:33.171949622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:52:33.172003 containerd[1535]: time="2025-10-31T00:52:33.171983283Z" level=info msg="RemovePodSandbox \"4c1ff8f77ca8225253f7b87874d03762a63424051ab3c7d8179a10904eb22b3b\" returns successfully" Oct 31 00:52:33.473354 containerd[1535]: time="2025-10-31T00:52:33.473309262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:52:33.807557 containerd[1535]: time="2025-10-31T00:52:33.807321651Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:33.807998 containerd[1535]: time="2025-10-31T00:52:33.807789021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:52:33.807998 containerd[1535]: time="2025-10-31T00:52:33.807853856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:52:33.809743 kubelet[2709]: E1031 00:52:33.807948 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:33.809743 kubelet[2709]: E1031 00:52:33.808013 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:52:33.809743 kubelet[2709]: E1031 00:52:33.808158 2709 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6577bb4886-mlhcr_calico-apiserver(38874b0f-68ea-48b6-8bf2-17a9a00061c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:33.809743 kubelet[2709]: E1031 00:52:33.809363 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:33.810092 containerd[1535]: time="2025-10-31T00:52:33.808322085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:52:34.131353 containerd[1535]: time="2025-10-31T00:52:34.131015769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:34.131486 containerd[1535]: time="2025-10-31T00:52:34.131384651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:52:34.131486 containerd[1535]: time="2025-10-31T00:52:34.131453015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:52:34.131878 kubelet[2709]: E1031 00:52:34.131669 2709 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:52:34.131878 kubelet[2709]: E1031 00:52:34.131707 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:52:34.131878 kubelet[2709]: E1031 00:52:34.131772 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:34.132679 containerd[1535]: time="2025-10-31T00:52:34.132604493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:52:34.488121 containerd[1535]: time="2025-10-31T00:52:34.488082855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:34.488944 containerd[1535]: time="2025-10-31T00:52:34.488912327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:52:34.488992 containerd[1535]: time="2025-10-31T00:52:34.488957951Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:52:34.489088 kubelet[2709]: E1031 00:52:34.489060 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:52:34.489159 kubelet[2709]: E1031 00:52:34.489089 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:52:34.489159 kubelet[2709]: E1031 00:52:34.489140 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:34.489358 kubelet[2709]: E1031 00:52:34.489170 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4"
Oct 31 00:52:38.473939 kubelet[2709]: E1031 00:52:38.473475 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff"
Oct 31 00:52:38.683172 systemd[1]: run-containerd-runc-k8s.io-aedcab1663400171fac8ea15e99e8a97eed1e80174b781965f72c83b6acef519-runc.Cq04Cw.mount: Deactivated successfully.
Oct 31 00:52:40.473955 kubelet[2709]: E1031 00:52:40.473918 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6"
Oct 31 00:52:41.472646 kubelet[2709]: E1031 00:52:41.472462 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df"
Oct 31 00:52:42.346328 systemd[1]: Started sshd@7-139.178.70.106:22-139.178.68.195:44634.service - OpenSSH per-connection server daemon (139.178.68.195:44634).
Oct 31 00:52:42.423762 sshd[5708]: Accepted publickey for core from 139.178.68.195 port 44634 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:52:42.426083 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:52:42.433060 systemd-logind[1514]: New session 10 of user core.
Oct 31 00:52:42.437754 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 31 00:52:42.904174 sshd[5708]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:42.907146 systemd[1]: sshd@7-139.178.70.106:22-139.178.68.195:44634.service: Deactivated successfully.
Oct 31 00:52:42.908169 systemd[1]: session-10.scope: Deactivated successfully.
Oct 31 00:52:42.908791 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
Oct 31 00:52:42.909303 systemd-logind[1514]: Removed session 10.
Oct 31 00:52:45.472692 kubelet[2709]: E1031 00:52:45.472656 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9"
Oct 31 00:52:45.483007 kubelet[2709]: E1031 00:52:45.482972 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4"
Oct 31 00:52:47.914663 systemd[1]: Started sshd@8-139.178.70.106:22-139.178.68.195:59750.service - OpenSSH per-connection server daemon (139.178.68.195:59750).
Oct 31 00:52:47.951991 sshd[5736]: Accepted publickey for core from 139.178.68.195 port 59750 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:52:47.952724 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:52:47.955254 systemd-logind[1514]: New session 11 of user core.
Oct 31 00:52:47.962732 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 31 00:52:48.062957 sshd[5736]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:48.065528 systemd[1]: sshd@8-139.178.70.106:22-139.178.68.195:59750.service: Deactivated successfully.
Oct 31 00:52:48.066515 systemd[1]: session-11.scope: Deactivated successfully.
Oct 31 00:52:48.066955 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
Oct 31 00:52:48.067471 systemd-logind[1514]: Removed session 11.
Oct 31 00:52:49.472858 kubelet[2709]: E1031 00:52:49.472169 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5" Oct 31 00:52:52.473175 containerd[1535]: time="2025-10-31T00:52:52.473145788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:52:52.806005 containerd[1535]: time="2025-10-31T00:52:52.805870318Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:52.806383 containerd[1535]: time="2025-10-31T00:52:52.806302604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:52:52.806383 containerd[1535]: time="2025-10-31T00:52:52.806352466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:52:52.806479 kubelet[2709]: E1031 00:52:52.806439 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:52:52.806479 kubelet[2709]: E1031 00:52:52.806467 2709 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:52:52.807023 kubelet[2709]: E1031 00:52:52.806520 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-565cb9cccd-v4n4z_calico-system(e4d268cd-a761-4f69-b6d1-2ef35e9f3bff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:52:52.807726 containerd[1535]: time="2025-10-31T00:52:52.807484434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:52:53.073704 systemd[1]: Started sshd@9-139.178.70.106:22-139.178.68.195:57666.service - OpenSSH per-connection server daemon (139.178.68.195:57666). Oct 31 00:52:53.119320 sshd[5751]: Accepted publickey for core from 139.178.68.195 port 57666 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w Oct 31 00:52:53.120327 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:52:53.123799 systemd-logind[1514]: New session 12 of user core. Oct 31 00:52:53.130808 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 31 00:52:53.138042 containerd[1535]: time="2025-10-31T00:52:53.137999727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:52:53.138383 containerd[1535]: time="2025-10-31T00:52:53.138356870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:52:53.138453 containerd[1535]: time="2025-10-31T00:52:53.138411744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:52:53.138541 kubelet[2709]: E1031 00:52:53.138512 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:52:53.138579 kubelet[2709]: E1031 00:52:53.138548 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:52:53.138643 kubelet[2709]: E1031 00:52:53.138605 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-565cb9cccd-v4n4z_calico-system(e4d268cd-a761-4f69-b6d1-2ef35e9f3bff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:52:53.138691 kubelet[2709]: E1031 00:52:53.138654 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff"
Oct 31 00:52:53.237802 sshd[5751]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:53.244050 systemd[1]: sshd@9-139.178.70.106:22-139.178.68.195:57666.service: Deactivated successfully.
Oct 31 00:52:53.244965 systemd[1]: session-12.scope: Deactivated successfully.
Oct 31 00:52:53.245449 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
Oct 31 00:52:53.251791 systemd[1]: Started sshd@10-139.178.70.106:22-139.178.68.195:57676.service - OpenSSH per-connection server daemon (139.178.68.195:57676).
Oct 31 00:52:53.253942 systemd-logind[1514]: Removed session 12.
Oct 31 00:52:53.274275 sshd[5764]: Accepted publickey for core from 139.178.68.195 port 57676 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:52:53.275974 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:52:53.279064 systemd-logind[1514]: New session 13 of user core.
Oct 31 00:52:53.283733 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 31 00:52:53.413409 sshd[5764]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:53.422661 systemd[1]: sshd@10-139.178.70.106:22-139.178.68.195:57676.service: Deactivated successfully.
Oct 31 00:52:53.425131 systemd[1]: session-13.scope: Deactivated successfully.
Oct 31 00:52:53.426598 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
Oct 31 00:52:53.435107 systemd[1]: Started sshd@11-139.178.70.106:22-139.178.68.195:57690.service - OpenSSH per-connection server daemon (139.178.68.195:57690).
Oct 31 00:52:53.438075 systemd-logind[1514]: Removed session 13.
Oct 31 00:52:53.484979 sshd[5775]: Accepted publickey for core from 139.178.68.195 port 57690 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:52:53.486361 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:52:53.491027 systemd-logind[1514]: New session 14 of user core.
Oct 31 00:52:53.496772 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 31 00:52:53.608401 sshd[5775]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:53.610851 systemd[1]: sshd@11-139.178.70.106:22-139.178.68.195:57690.service: Deactivated successfully.
Oct 31 00:52:53.612419 systemd[1]: session-14.scope: Deactivated successfully.
Oct 31 00:52:53.614846 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
Oct 31 00:52:53.615427 systemd-logind[1514]: Removed session 14.
Oct 31 00:52:54.472941 containerd[1535]: time="2025-10-31T00:52:54.472730393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:52:54.845739 containerd[1535]: time="2025-10-31T00:52:54.845605119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:52:54.846213 containerd[1535]: time="2025-10-31T00:52:54.846189822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:52:54.846278 containerd[1535]: time="2025-10-31T00:52:54.846249175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:52:54.846375 kubelet[2709]: E1031 00:52:54.846347 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:52:54.846601 kubelet[2709]: E1031 00:52:54.846380 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:52:54.846601 kubelet[2709]: E1031 00:52:54.846429 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6577bb4886-7s98w_calico-apiserver(3561ae24-137d-44ba-89a5-d4068542bce6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:52:54.846601 kubelet[2709]: E1031 00:52:54.846450 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6"
Oct 31 00:52:55.474833 containerd[1535]: time="2025-10-31T00:52:55.474672241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 31 00:52:55.854799 containerd[1535]: time="2025-10-31T00:52:55.854717062Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:52:55.859346 containerd[1535]: time="2025-10-31T00:52:55.859323158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 31 00:52:55.859402 containerd[1535]: time="2025-10-31T00:52:55.859384406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:52:55.859535 kubelet[2709]: E1031 00:52:55.859496 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:52:55.860000 kubelet[2709]: E1031 00:52:55.859543 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:52:55.860000 kubelet[2709]: E1031 00:52:55.859605 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-62kb2_calico-system(56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:52:55.860000 kubelet[2709]: E1031 00:52:55.859649 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df"
Oct 31 00:52:58.618947 systemd[1]: Started sshd@12-139.178.70.106:22-139.178.68.195:57694.service - OpenSSH per-connection server daemon (139.178.68.195:57694).
Oct 31 00:52:58.704691 sshd[5791]: Accepted publickey for core from 139.178.68.195 port 57694 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:52:58.705707 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:52:58.709405 systemd-logind[1514]: New session 15 of user core.
Oct 31 00:52:58.711745 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 31 00:52:58.834142 sshd[5791]: pam_unix(sshd:session): session closed for user core
Oct 31 00:52:58.836211 systemd[1]: sshd@12-139.178.70.106:22-139.178.68.195:57694.service: Deactivated successfully.
Oct 31 00:52:58.837363 systemd[1]: session-15.scope: Deactivated successfully.
Oct 31 00:52:58.837820 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit.
Oct 31 00:52:58.838339 systemd-logind[1514]: Removed session 15.
Oct 31 00:52:59.471925 containerd[1535]: time="2025-10-31T00:52:59.471866909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 00:52:59.797029 containerd[1535]: time="2025-10-31T00:52:59.796838214Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:52:59.797392 containerd[1535]: time="2025-10-31T00:52:59.797158926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 00:52:59.797392 containerd[1535]: time="2025-10-31T00:52:59.797198956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 31 00:52:59.797471 kubelet[2709]: E1031 00:52:59.797284 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:52:59.797471 kubelet[2709]: E1031 00:52:59.797324 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:52:59.797471 kubelet[2709]: E1031 00:52:59.797437 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:52:59.798586 containerd[1535]: time="2025-10-31T00:52:59.798088005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 31 00:53:00.164025 containerd[1535]: time="2025-10-31T00:53:00.163557432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:53:00.164517 containerd[1535]: time="2025-10-31T00:53:00.164262434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 31 00:53:00.164517 containerd[1535]: time="2025-10-31T00:53:00.164321026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 31 00:53:00.164590 kubelet[2709]: E1031 00:53:00.164382 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:53:00.164590 kubelet[2709]: E1031 00:53:00.164410 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:53:00.164590 kubelet[2709]: E1031 00:53:00.164556 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84c8975546-52zt4_calico-system(fb892052-e9f7-4494-bd9e-d42433970af9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:53:00.164738 kubelet[2709]: E1031 00:53:00.164588 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9"
Oct 31 00:53:00.164945 containerd[1535]: time="2025-10-31T00:53:00.164924345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 00:53:00.848300 containerd[1535]: time="2025-10-31T00:53:00.848271033Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:53:00.848616 containerd[1535]: time="2025-10-31T00:53:00.848585479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 00:53:00.848616 containerd[1535]: time="2025-10-31T00:53:00.848651363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 31 00:53:00.848894 kubelet[2709]: E1031 00:53:00.848764 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:53:00.848894 kubelet[2709]: E1031 00:53:00.848805 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:53:00.848894 kubelet[2709]: E1031 00:53:00.848862 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qfqgh_calico-system(5e393e8e-d87c-4c00-a0d8-1932978c09f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:53:00.849282 kubelet[2709]: E1031 00:53:00.848895 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4"
Oct 31 00:53:03.843856 systemd[1]: Started sshd@13-139.178.70.106:22-139.178.68.195:58956.service - OpenSSH per-connection server daemon (139.178.68.195:58956).
Oct 31 00:53:03.886351 sshd[5812]: Accepted publickey for core from 139.178.68.195 port 58956 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:03.887303 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:03.890666 systemd-logind[1514]: New session 16 of user core.
Oct 31 00:53:03.893799 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 31 00:53:03.999563 sshd[5812]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:04.001658 systemd[1]: sshd@13-139.178.70.106:22-139.178.68.195:58956.service: Deactivated successfully.
Oct 31 00:53:04.002838 systemd[1]: session-16.scope: Deactivated successfully.
Oct 31 00:53:04.003284 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit.
Oct 31 00:53:04.003918 systemd-logind[1514]: Removed session 16.
Oct 31 00:53:04.472251 containerd[1535]: time="2025-10-31T00:53:04.472221061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:53:04.801235 containerd[1535]: time="2025-10-31T00:53:04.801152651Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:53:04.801499 containerd[1535]: time="2025-10-31T00:53:04.801464190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:53:04.801542 containerd[1535]: time="2025-10-31T00:53:04.801506998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:53:04.802673 kubelet[2709]: E1031 00:53:04.801730 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:53:04.802673 kubelet[2709]: E1031 00:53:04.801768 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:53:04.802673 kubelet[2709]: E1031 00:53:04.801824 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6577bb4886-mlhcr_calico-apiserver(38874b0f-68ea-48b6-8bf2-17a9a00061c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:53:04.802673 kubelet[2709]: E1031 00:53:04.801850 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5"
Oct 31 00:53:06.472598 kubelet[2709]: E1031 00:53:06.472388 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6"
Oct 31 00:53:06.473529 kubelet[2709]: E1031 00:53:06.473477 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df"
Oct 31 00:53:07.479650 kubelet[2709]: E1031 00:53:07.479611 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff"
Oct 31 00:53:09.009421 systemd[1]: Started sshd@14-139.178.70.106:22-139.178.68.195:58962.service - OpenSSH per-connection server daemon (139.178.68.195:58962).
Oct 31 00:53:09.054808 sshd[5847]: Accepted publickey for core from 139.178.68.195 port 58962 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:09.057361 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:09.060792 systemd-logind[1514]: New session 17 of user core.
Oct 31 00:53:09.070729 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 31 00:53:09.186922 sshd[5847]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:09.189063 systemd[1]: sshd@14-139.178.70.106:22-139.178.68.195:58962.service: Deactivated successfully.
Oct 31 00:53:09.190121 systemd[1]: session-17.scope: Deactivated successfully.
Oct 31 00:53:09.190548 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit.
Oct 31 00:53:09.191094 systemd-logind[1514]: Removed session 17.
Oct 31 00:53:12.473032 kubelet[2709]: E1031 00:53:12.472978 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9"
Oct 31 00:53:12.473863 kubelet[2709]: E1031 00:53:12.473506 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4"
Oct 31 00:53:14.197416 systemd[1]: Started sshd@15-139.178.70.106:22-139.178.68.195:54464.service - OpenSSH per-connection server daemon (139.178.68.195:54464).
Oct 31 00:53:14.253294 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 54464 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:14.254417 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:14.257779 systemd-logind[1514]: New session 18 of user core.
Oct 31 00:53:14.261714 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 31 00:53:14.396953 sshd[5864]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:14.401742 systemd[1]: Started sshd@16-139.178.70.106:22-139.178.68.195:54468.service - OpenSSH per-connection server daemon (139.178.68.195:54468).
Oct 31 00:53:14.403417 systemd[1]: sshd@15-139.178.70.106:22-139.178.68.195:54464.service: Deactivated successfully.
Oct 31 00:53:14.405435 systemd[1]: session-18.scope: Deactivated successfully.
Oct 31 00:53:14.406669 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit.
Oct 31 00:53:14.408179 systemd-logind[1514]: Removed session 18.
Oct 31 00:53:14.439492 sshd[5875]: Accepted publickey for core from 139.178.68.195 port 54468 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:14.440476 sshd[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:14.443114 systemd-logind[1514]: New session 19 of user core.
Oct 31 00:53:14.447808 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 31 00:53:14.832785 sshd[5875]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:14.839193 systemd[1]: Started sshd@17-139.178.70.106:22-139.178.68.195:54478.service - OpenSSH per-connection server daemon (139.178.68.195:54478).
Oct 31 00:53:14.839445 systemd[1]: sshd@16-139.178.70.106:22-139.178.68.195:54468.service: Deactivated successfully.
Oct 31 00:53:14.840384 systemd[1]: session-19.scope: Deactivated successfully.
Oct 31 00:53:14.841282 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit.
Oct 31 00:53:14.842535 systemd-logind[1514]: Removed session 19.
Oct 31 00:53:14.888816 sshd[5885]: Accepted publickey for core from 139.178.68.195 port 54478 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:14.889733 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:14.893521 systemd-logind[1514]: New session 20 of user core.
Oct 31 00:53:14.899744 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 31 00:53:15.476398 sshd[5885]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:15.484342 systemd[1]: sshd@17-139.178.70.106:22-139.178.68.195:54478.service: Deactivated successfully.
Oct 31 00:53:15.485670 systemd[1]: session-20.scope: Deactivated successfully.
Oct 31 00:53:15.487572 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit.
Oct 31 00:53:15.493906 systemd[1]: Started sshd@18-139.178.70.106:22-139.178.68.195:54484.service - OpenSSH per-connection server daemon (139.178.68.195:54484).
Oct 31 00:53:15.494828 systemd-logind[1514]: Removed session 20.
Oct 31 00:53:15.548391 sshd[5903]: Accepted publickey for core from 139.178.68.195 port 54484 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:15.549273 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:15.552584 systemd-logind[1514]: New session 21 of user core.
Oct 31 00:53:15.557829 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 31 00:53:15.801312 sshd[5903]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:15.808477 systemd[1]: sshd@18-139.178.70.106:22-139.178.68.195:54484.service: Deactivated successfully.
Oct 31 00:53:15.809610 systemd[1]: session-21.scope: Deactivated successfully.
Oct 31 00:53:15.811513 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit.
Oct 31 00:53:15.817877 systemd[1]: Started sshd@19-139.178.70.106:22-139.178.68.195:54500.service - OpenSSH per-connection server daemon (139.178.68.195:54500).
Oct 31 00:53:15.819680 systemd-logind[1514]: Removed session 21.
Oct 31 00:53:15.854780 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 54500 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:15.855087 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:15.858565 systemd-logind[1514]: New session 22 of user core.
Oct 31 00:53:15.865787 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 31 00:53:15.957761 sshd[5914]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:15.959724 systemd[1]: sshd@19-139.178.70.106:22-139.178.68.195:54500.service: Deactivated successfully.
Oct 31 00:53:15.961584 systemd[1]: session-22.scope: Deactivated successfully.
Oct 31 00:53:15.962601 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit.
Oct 31 00:53:15.963293 systemd-logind[1514]: Removed session 22.
Oct 31 00:53:17.473321 kubelet[2709]: E1031 00:53:17.473205 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5"
Oct 31 00:53:17.479030 kubelet[2709]: E1031 00:53:17.473482 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6"
Oct 31 00:53:20.967484 systemd[1]: Started sshd@20-139.178.70.106:22-139.178.68.195:54508.service - OpenSSH per-connection server daemon (139.178.68.195:54508).
Oct 31 00:53:20.993317 sshd[5934]: Accepted publickey for core from 139.178.68.195 port 54508 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:20.994063 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:20.996939 systemd-logind[1514]: New session 23 of user core.
Oct 31 00:53:21.001812 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 31 00:53:21.094653 sshd[5934]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:21.097094 systemd[1]: sshd@20-139.178.70.106:22-139.178.68.195:54508.service: Deactivated successfully.
Oct 31 00:53:21.098431 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 00:53:21.099005 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit.
Oct 31 00:53:21.099585 systemd-logind[1514]: Removed session 23.
Oct 31 00:53:21.472902 kubelet[2709]: E1031 00:53:21.472647 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-62kb2" podUID="56a98ce3-aefb-4f4d-a4ba-fe832cc8a1df"
Oct 31 00:53:21.474114 kubelet[2709]: E1031 00:53:21.473933 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-565cb9cccd-v4n4z" podUID="e4d268cd-a761-4f69-b6d1-2ef35e9f3bff"
Oct 31 00:53:25.472646 kubelet[2709]: E1031 00:53:25.472410 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c8975546-52zt4" podUID="fb892052-e9f7-4494-bd9e-d42433970af9"
Oct 31 00:53:25.474203 kubelet[2709]: E1031 00:53:25.473492 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qfqgh" podUID="5e393e8e-d87c-4c00-a0d8-1932978c09f4"
Oct 31 00:53:26.108563 systemd[1]: Started sshd@21-139.178.70.106:22-139.178.68.195:34118.service - OpenSSH per-connection server daemon (139.178.68.195:34118).
Oct 31 00:53:26.157273 sshd[5948]: Accepted publickey for core from 139.178.68.195 port 34118 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:26.157564 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:26.160367 systemd-logind[1514]: New session 24 of user core.
Oct 31 00:53:26.166752 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 31 00:53:26.260844 sshd[5948]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:26.263432 systemd[1]: sshd@21-139.178.70.106:22-139.178.68.195:34118.service: Deactivated successfully.
Oct 31 00:53:26.264715 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 00:53:26.265239 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit.
Oct 31 00:53:26.265976 systemd-logind[1514]: Removed session 24.
Oct 31 00:53:29.472043 kubelet[2709]: E1031 00:53:29.472006 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-7s98w" podUID="3561ae24-137d-44ba-89a5-d4068542bce6"
Oct 31 00:53:29.472409 kubelet[2709]: E1031 00:53:29.472060 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6577bb4886-mlhcr" podUID="38874b0f-68ea-48b6-8bf2-17a9a00061c5"
Oct 31 00:53:31.270024 systemd[1]: Started sshd@22-139.178.70.106:22-139.178.68.195:34122.service - OpenSSH per-connection server daemon (139.178.68.195:34122).
Oct 31 00:53:31.360275 sshd[5962]: Accepted publickey for core from 139.178.68.195 port 34122 ssh2: RSA SHA256:oP22LaO0XIGArebTxOYO0fKYAeWWO0nZUw/MwdKnF1w
Oct 31 00:53:31.360842 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:53:31.363455 systemd-logind[1514]: New session 25 of user core.
Oct 31 00:53:31.366701 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 31 00:53:31.559827 sshd[5962]: pam_unix(sshd:session): session closed for user core
Oct 31 00:53:31.561644 systemd[1]: sshd@22-139.178.70.106:22-139.178.68.195:34122.service: Deactivated successfully.
Oct 31 00:53:31.563171 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 00:53:31.563665 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit.
Oct 31 00:53:31.564214 systemd-logind[1514]: Removed session 25.
Oct 31 00:53:33.182453 systemd[1]: Started sshd@23-139.178.70.106:22-112.187.124.38:49902.service - OpenSSH per-connection server daemon (112.187.124.38:49902).